I0130 21:08:58.641535 8 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0130 21:08:58.642087 8 e2e.go:109] Starting e2e run "4e0e84fb-f3f9-4c79-8dce-3815ab320190" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580418537 - Will randomize all specs
Will run 278 of 4814 specs

Jan 30 21:08:58.703: INFO: >>> kubeConfig: /root/.kube/config
Jan 30 21:08:58.706: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 30 21:08:58.733: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 30 21:08:58.765: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 30 21:08:58.765: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 30 21:08:58.765: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 30 21:08:58.779: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 30 21:08:58.779: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 30 21:08:58.779: INFO: e2e test version: v1.17.0
Jan 30 21:08:58.780: INFO: kube-apiserver version: v1.17.0
Jan 30 21:08:58.780: INFO: >>> kubeConfig: /root/.kube/config
Jan 30 21:08:58.785: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:08:58.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jan 30 21:08:58.916: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 21:08:58.934: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2153d4f7-483b-4eff-ab9e-7bdc7fe53dd9" in namespace "projected-1535" to be "success or failure"
Jan 30 21:08:58.963: INFO: Pod "downwardapi-volume-2153d4f7-483b-4eff-ab9e-7bdc7fe53dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.286413ms
Jan 30 21:09:00.973: INFO: Pod "downwardapi-volume-2153d4f7-483b-4eff-ab9e-7bdc7fe53dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038501231s
Jan 30 21:09:02.980: INFO: Pod "downwardapi-volume-2153d4f7-483b-4eff-ab9e-7bdc7fe53dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046225019s
Jan 30 21:09:04.986: INFO: Pod "downwardapi-volume-2153d4f7-483b-4eff-ab9e-7bdc7fe53dd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052318511s
STEP: Saw pod success
Jan 30 21:09:04.987: INFO: Pod "downwardapi-volume-2153d4f7-483b-4eff-ab9e-7bdc7fe53dd9" satisfied condition "success or failure"
Jan 30 21:09:05.020: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2153d4f7-483b-4eff-ab9e-7bdc7fe53dd9 container client-container:
STEP: delete the pod
Jan 30 21:09:05.098: INFO: Waiting for pod downwardapi-volume-2153d4f7-483b-4eff-ab9e-7bdc7fe53dd9 to disappear
Jan 30 21:09:05.111: INFO: Pod downwardapi-volume-2153d4f7-483b-4eff-ab9e-7bdc7fe53dd9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:09:05.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1535" for this suite.
• [SLOW TEST:6.377 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":5,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
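For reference, the pod this first test creates looks roughly like the sketch below: a projected downwardAPI volume renders the container's CPU request into a file, which the container then prints so the test can compare its log output against the expected value. This is a hand-written approximation, not the suite's actual fixture; the pod name, image, mount path, and the divisor of 1m (render in millicores) are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # invented name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # any image with a shell works
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                      # the value the volume should expose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m              # assumed: render as millicores, e.g. "250"

The test waits for the pod to reach the "success or failure" condition (phase Succeeded or Failed), then checks the container log; most of the volume tests below repeat this same pattern.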
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:09:05.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 30 21:09:06.716: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 30 21:09:08.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:09:10.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:09:12.765: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015346, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 21:09:15.778: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:09:15.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:09:16.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8158" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:11.668 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":2,"skipped":20,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:09:16.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:09:16.905: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:09:17.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9606" for this suite. 
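The getting/updating/patching test above runs entirely against the status subresource of a CRD, which only exists if the CRD opts in. A sketch of the relevant stanza (group, names, and schema invented here):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com            # invented
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}                       # exposes .../widgets/<name>/status
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

With the subresource enabled, GET/PUT/PATCH against the /status endpoint touch only .status, and writes to the main resource ignore .status; that separation is what the test verifies.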
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":3,"skipped":29,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:09:17.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 30 21:09:18.105: INFO: Waiting up to 5m0s for pod "pod-e7135d32-d3ba-4b93-a9d5-c88945fe5c4b" in namespace "emptydir-9248" to be "success or failure" Jan 30 21:09:18.229: INFO: Pod "pod-e7135d32-d3ba-4b93-a9d5-c88945fe5c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 124.19011ms Jan 30 21:09:20.234: INFO: Pod "pod-e7135d32-d3ba-4b93-a9d5-c88945fe5c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128576139s Jan 30 21:09:22.251: INFO: Pod "pod-e7135d32-d3ba-4b93-a9d5-c88945fe5c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146317057s Jan 30 21:09:24.256: INFO: Pod "pod-e7135d32-d3ba-4b93-a9d5-c88945fe5c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150693513s Jan 30 21:09:26.260: INFO: Pod "pod-e7135d32-d3ba-4b93-a9d5-c88945fe5c4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.15540863s STEP: Saw pod success Jan 30 21:09:26.261: INFO: Pod "pod-e7135d32-d3ba-4b93-a9d5-c88945fe5c4b" satisfied condition "success or failure" Jan 30 21:09:26.264: INFO: Trying to get logs from node jerma-node pod pod-e7135d32-d3ba-4b93-a9d5-c88945fe5c4b container test-container: STEP: delete the pod Jan 30 21:09:26.298: INFO: Waiting for pod pod-e7135d32-d3ba-4b93-a9d5-c88945fe5c4b to disappear Jan 30 21:09:26.305: INFO: Pod pod-e7135d32-d3ba-4b93-a9d5-c88945fe5c4b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:09:26.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9248" for this suite. 
• [SLOW TEST:8.720 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":30,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:09:26.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-85d0c0f4-e685-4c9c-812d-e7a5ecdfa8f9 STEP: Creating a pod to test consume configMaps Jan 30 21:09:26.575: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cef550f2-5d00-40d3-a2ce-97d478570e0c" in namespace "projected-8541" to be "success or failure" Jan 30 21:09:26.602: INFO: Pod "pod-projected-configmaps-cef550f2-5d00-40d3-a2ce-97d478570e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.070762ms Jan 30 21:09:28.616: INFO: Pod "pod-projected-configmaps-cef550f2-5d00-40d3-a2ce-97d478570e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040211252s Jan 30 21:09:30.624: INFO: Pod "pod-projected-configmaps-cef550f2-5d00-40d3-a2ce-97d478570e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048273188s Jan 30 21:09:32.631: INFO: Pod "pod-projected-configmaps-cef550f2-5d00-40d3-a2ce-97d478570e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055673298s Jan 30 21:09:34.636: INFO: Pod "pod-projected-configmaps-cef550f2-5d00-40d3-a2ce-97d478570e0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060791635s STEP: Saw pod success Jan 30 21:09:34.636: INFO: Pod "pod-projected-configmaps-cef550f2-5d00-40d3-a2ce-97d478570e0c" satisfied condition "success or failure" Jan 30 21:09:34.640: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-cef550f2-5d00-40d3-a2ce-97d478570e0c container projected-configmap-volume-test: STEP: delete the pod Jan 30 21:09:34.722: INFO: Waiting for pod pod-projected-configmaps-cef550f2-5d00-40d3-a2ce-97d478570e0c to disappear Jan 30 21:09:34.730: INFO: Pod pod-projected-configmaps-cef550f2-5d00-40d3-a2ce-97d478570e0c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:09:34.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8541" for this suite. 
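"With mappings as non-root" in the projected configMap test above means the configMap keys are remapped to explicit paths via items, and the pod runs with a non-zero UID. A sketch under invented names (the suite generates its own configMap and pod names):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo       # invented name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo  # invented name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # the "as non-root" part
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
          items:
          - key: data-1
            path: path/to/data-1       # the mapping under test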
• [SLOW TEST:8.408 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":48,"failed":0}
SS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:09:34.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-2dcd1765-ac14-4e89-b083-5d395991fd08
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:09:34.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8134" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":6,"skipped":50,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:09:34.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-98b44829-b4c5-46f4-8158-9b5229a5d5bc in namespace container-probe-9483
Jan 30 21:09:40.949: INFO: Started pod liveness-98b44829-b4c5-46f4-8158-9b5229a5d5bc in namespace container-probe-9483
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 21:09:40.965: INFO: Initial restart count of pod liveness-98b44829-b4c5-46f4-8158-9b5229a5d5bc is 0
Jan 30 21:10:03.070: INFO: Restart count of pod container-probe-9483/liveness-98b44829-b4c5-46f4-8158-9b5229a5d5bc is now 1 (22.104366834s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:10:03.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9483" for this suite.
• [SLOW TEST:28.257 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":104,"failed":0}
SSSSSSSSSSS
------------------------------
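The restart the probe test waits for is produced by an HTTP liveness probe against /healthz on a server that deliberately starts failing after a few seconds. A minimal reproduction in the style of the upstream docs example (the suite's own pod differs in image and thresholds):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo             # invented name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness         # docs image: serves 200 for ~10s, then 500
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1              # restart on the first failed probe (default is 3)

Once the probe fails, the kubelet kills and restarts the container, and restartCount in the pod status ticks from 0 to 1, which is exactly what the log above records after ~22s.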
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:10:03.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-cfkm
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 21:10:03.325: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cfkm" in namespace "subpath-917" to be "success or failure"
Jan 30 21:10:03.338: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Pending", Reason="", readiness=false. Elapsed: 13.149905ms
Jan 30 21:10:05.613: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288370952s
Jan 30 21:10:07.619: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294620911s
Jan 30 21:10:09.627: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.302468515s
Jan 30 21:10:11.635: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true. Elapsed: 8.309865531s
Jan 30 21:10:13.641: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true. Elapsed: 10.316134515s
Jan 30 21:10:15.646: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true. Elapsed: 12.321594826s
Jan 30 21:10:17.652: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true. Elapsed: 14.327332087s
Jan 30 21:10:19.662: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true. Elapsed: 16.33719026s
Jan 30 21:10:21.669: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true. Elapsed: 18.344617578s
Jan 30 21:10:23.680: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true.
Elapsed: 20.355594226s Jan 30 21:10:25.690: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true. Elapsed: 22.365489923s Jan 30 21:10:27.698: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true. Elapsed: 24.372737295s Jan 30 21:10:29.705: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true. Elapsed: 26.379978381s Jan 30 21:10:31.722: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Running", Reason="", readiness=true. Elapsed: 28.397372407s Jan 30 21:10:33.779: INFO: Pod "pod-subpath-test-configmap-cfkm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.454121679s STEP: Saw pod success Jan 30 21:10:33.779: INFO: Pod "pod-subpath-test-configmap-cfkm" satisfied condition "success or failure" Jan 30 21:10:33.789: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-cfkm container test-container-subpath-configmap-cfkm: STEP: delete the pod Jan 30 21:10:33.866: INFO: Waiting for pod pod-subpath-test-configmap-cfkm to disappear Jan 30 21:10:33.906: INFO: Pod pod-subpath-test-configmap-cfkm no longer exists STEP: Deleting pod pod-subpath-test-configmap-cfkm Jan 30 21:10:33.907: INFO: Deleting pod "pod-subpath-test-configmap-cfkm" in namespace "subpath-917" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:10:33.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-917" for this suite. • [SLOW TEST:30.804 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":8,"skipped":115,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:10:33.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 30 21:10:34.076: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9da82754-88f9-4973-bca2-b0e65ccd3189" in namespace "projected-913" to be "success or failure" Jan 30 21:10:34.099: INFO: Pod "downwardapi-volume-9da82754-88f9-4973-bca2-b0e65ccd3189": Phase="Pending", 
Reason="", readiness=false. Elapsed: 23.161286ms Jan 30 21:10:36.112: INFO: Pod "downwardapi-volume-9da82754-88f9-4973-bca2-b0e65ccd3189": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035966262s Jan 30 21:10:38.120: INFO: Pod "downwardapi-volume-9da82754-88f9-4973-bca2-b0e65ccd3189": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04397566s Jan 30 21:10:40.129: INFO: Pod "downwardapi-volume-9da82754-88f9-4973-bca2-b0e65ccd3189": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052802086s Jan 30 21:10:42.138: INFO: Pod "downwardapi-volume-9da82754-88f9-4973-bca2-b0e65ccd3189": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062062024s STEP: Saw pod success Jan 30 21:10:42.138: INFO: Pod "downwardapi-volume-9da82754-88f9-4973-bca2-b0e65ccd3189" satisfied condition "success or failure" Jan 30 21:10:42.141: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9da82754-88f9-4973-bca2-b0e65ccd3189 container client-container: STEP: delete the pod Jan 30 21:10:42.181: INFO: Waiting for pod downwardapi-volume-9da82754-88f9-4973-bca2-b0e65ccd3189 to disappear Jan 30 21:10:42.185: INFO: Pod downwardapi-volume-9da82754-88f9-4973-bca2-b0e65ccd3189 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:10:42.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-913" for this suite. • [SLOW TEST:8.272 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":122,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:10:42.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:10:42.292: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 30 21:10:47.318: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 30 21:10:49.329: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 30 21:10:51.335: INFO: Creating deployment "test-rollover-deployment" Jan 30 21:10:51.349: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 30 21:10:53.363: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 30 21:10:53.372: INFO: Ensure that both replica 
sets have 1 created replica Jan 30 21:10:53.381: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 30 21:10:53.393: INFO: Updating deployment test-rollover-deployment Jan 30 21:10:53.393: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 30 21:10:55.419: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 30 21:10:55.430: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 30 21:10:55.440: INFO: all replica sets need to contain the pod-template-hash label Jan 30 21:10:55.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015453, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:10:57.453: INFO: all replica sets need to contain the pod-template-hash label Jan 30 21:10:57.453: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015453, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:10:59.451: INFO: all replica sets need to contain the pod-template-hash label Jan 30 21:10:59.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015453, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:11:01.452: INFO: all replica sets need to contain the pod-template-hash label Jan 30 21:11:01.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015460, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:11:03.451: INFO: all replica sets need to contain the pod-template-hash label Jan 30 21:11:03.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015460, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:11:05.455: INFO: all replica sets need to contain the pod-template-hash label Jan 30 21:11:05.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015460, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:11:07.462: INFO: all replica sets need to contain the pod-template-hash label Jan 30 21:11:07.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015460, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:11:09.447: INFO: all replica sets need to contain the pod-template-hash label Jan 30 21:11:09.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015460, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015451, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:11:11.460: INFO: Jan 30 21:11:11.460: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jan 30 21:11:11.475: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9785 /apis/apps/v1/namespaces/deployment-9785/deployments/test-rollover-deployment aa99ae0e-bd92-47c0-99d8-7f62eb25c9f9 5364362 2 2020-01-30 21:10:51 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0023fb638 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-30 21:10:51 +0000 UTC,LastTransitionTime:2020-01-30 21:10:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-30 21:11:10 +0000 UTC,LastTransitionTime:2020-01-30 21:10:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 30 21:11:11.489: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-9785 /apis/apps/v1/namespaces/deployment-9785/replicasets/test-rollover-deployment-574d6dfbff 3b6f1c17-e314-4038-81c1-6523ad6b6fa7 5364350 2 2020-01-30 21:10:53 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment aa99ae0e-bd92-47c0-99d8-7f62eb25c9f9 0xc0021c8a27 0xc0021c8a28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0021c8ae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 30 21:11:11.489: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 30 21:11:11.489: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9785 /apis/apps/v1/namespaces/deployment-9785/replicasets/test-rollover-controller e9900cf9-da42-4be3-b08b-7bf424ca4778 5364361 2 2020-01-30 21:10:42 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment aa99ae0e-bd92-47c0-99d8-7f62eb25c9f9 0xc0021c8857 0xc0021c8858}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0021c8988 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 30 21:11:11.489: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9785 /apis/apps/v1/namespaces/deployment-9785/replicasets/test-rollover-deployment-f6c94f66c 4cd351eb-ae8f-4efc-bf3e-05df8cb87916 5364306 2 2020-01-30 21:10:51 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment aa99ae0e-bd92-47c0-99d8-7f62eb25c9f9 0xc0021c8b50 0xc0021c8b51}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0021c8c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 30 21:11:11.495: INFO: Pod "test-rollover-deployment-574d6dfbff-5fcm7" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-5fcm7 test-rollover-deployment-574d6dfbff- deployment-9785 /api/v1/namespaces/deployment-9785/pods/test-rollover-deployment-574d6dfbff-5fcm7 53f8a104-c229-4aae-9fc1-a109d46405ee 5364326 0 2020-01-30 21:10:53 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 3b6f1c17-e314-4038-81c1-6523ad6b6fa7 0xc0021c9467 0xc0021c9468}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cc2gv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cc2gv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cc2gv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:10:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:11:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:10:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-30 21:10:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 21:10:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://1fa33a52045307b63eb01e2c0237b63fccc27ba7fe4254aa5f36a6b95daa395d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:11:11.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9785" for this suite.
• [SLOW TEST:29.319 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":10,"skipped":181,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
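The rollover assertions above pin down specific spec fields, all visible in the dumped objects: one replica, minReadySeconds 10, and a RollingUpdate strategy with maxSurge 1 and maxUnavailable 0, so an updated pod must be Ready for 10 seconds before the old one is scaled away. Reconstructed as a manifest (metadata trimmed; values taken from the log dump):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10                  # why the wait loop above idles ~10s after Ready
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8

"Rollover" here means updating the pod template again before the previous rollout finishes: the controller abandons the unfinished ReplicaSet (test-rollover-deployment-f6c94f66c, stuck on a nonexistent image) and drives the newest one (-574d6dfbff) to completion, leaving both old ReplicaSets at zero replicas, which is the final assertion.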
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:11:11.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 21:11:11.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b" in namespace "projected-7164" to be "success or failure"
Jan 30 21:11:11.762: INFO: Pod "downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b": Phase="Pending", Reason="", readiness=false. Elapsed: 83.498831ms
Jan 30 21:11:13.768: INFO: Pod "downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090131261s
Jan 30 21:11:15.777: INFO: Pod "downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099236008s
Jan 30 21:11:17.787: INFO: Pod "downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10872849s
Jan 30 21:11:19.807: INFO: Pod "downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12891898s
Jan 30 21:11:21.813: INFO: Pod "downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134573381s
STEP: Saw pod success
Jan 30 21:11:21.813: INFO: Pod "downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b" satisfied condition "success or failure"
Jan 30 21:11:21.827: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b container client-container:
STEP: delete the pod
Jan 30 21:11:21.881: INFO: Waiting for pod downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b to disappear
Jan 30 21:11:21.926: INFO: Pod downwardapi-volume-c9b8f1a4-f405-49eb-8438-3015dbc4583b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:11:21.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7164" for this suite.
• [SLOW TEST:10.419 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":197,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:11:21.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 30 21:11:42.129: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5033 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 21:11:42.130: INFO: >>> kubeConfig: /root/.kube/config
I0130 21:11:42.179137 8 log.go:172] (0xc001b8bd90) (0xc002939ae0) Create stream
I0130 21:11:42.179316 8 log.go:172] (0xc001b8bd90) (0xc002939ae0) Stream added, broadcasting: 1
I0130 21:11:42.183367 8 log.go:172] (0xc001b8bd90) Reply frame received for 1
I0130 21:11:42.183409 8 log.go:172] (0xc001b8bd90) (0xc002861ae0) Create stream
I0130 21:11:42.183416 8 log.go:172] (0xc001b8bd90) (0xc002861ae0) Stream added, broadcasting: 3
I0130 21:11:42.184488 8 log.go:172] (0xc001b8bd90) Reply frame received for 3
I0130 21:11:42.184509 8 log.go:172] (0xc001b8bd90)
(0xc002861b80) Create stream I0130 21:11:42.184516 8 log.go:172] (0xc001b8bd90) (0xc002861b80) Stream added, broadcasting: 5 I0130 21:11:42.185553 8 log.go:172] (0xc001b8bd90) Reply frame received for 5 I0130 21:11:42.236115 8 log.go:172] (0xc001b8bd90) Data frame received for 3 I0130 21:11:42.236233 8 log.go:172] (0xc002861ae0) (3) Data frame handling I0130 21:11:42.236249 8 log.go:172] (0xc002861ae0) (3) Data frame sent I0130 21:11:42.313931 8 log.go:172] (0xc001b8bd90) Data frame received for 1 I0130 21:11:42.314495 8 log.go:172] (0xc001b8bd90) (0xc002861b80) Stream removed, broadcasting: 5 I0130 21:11:42.314832 8 log.go:172] (0xc002939ae0) (1) Data frame handling I0130 21:11:42.314923 8 log.go:172] (0xc002939ae0) (1) Data frame sent I0130 21:11:42.315042 8 log.go:172] (0xc001b8bd90) (0xc002861ae0) Stream removed, broadcasting: 3 I0130 21:11:42.315212 8 log.go:172] (0xc001b8bd90) (0xc002939ae0) Stream removed, broadcasting: 1 I0130 21:11:42.315289 8 log.go:172] (0xc001b8bd90) Go away received I0130 21:11:42.316896 8 log.go:172] (0xc001b8bd90) (0xc002939ae0) Stream removed, broadcasting: 1 I0130 21:11:42.316924 8 log.go:172] (0xc001b8bd90) (0xc002861ae0) Stream removed, broadcasting: 3 I0130 21:11:42.316969 8 log.go:172] (0xc001b8bd90) (0xc002861b80) Stream removed, broadcasting: 5 Jan 30 21:11:42.316: INFO: Exec stderr: "" Jan 30 21:11:42.317: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5033 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 21:11:42.317: INFO: >>> kubeConfig: /root/.kube/config I0130 21:11:42.348896 8 log.go:172] (0xc001c4a580) (0xc0026c8280) Create stream I0130 21:11:42.348989 8 log.go:172] (0xc001c4a580) (0xc0026c8280) Stream added, broadcasting: 1 I0130 21:11:42.351472 8 log.go:172] (0xc001c4a580) Reply frame received for 1 I0130 21:11:42.351519 8 log.go:172] (0xc001c4a580) (0xc002861c20) Create stream I0130 21:11:42.351534 8 log.go:172] (0xc001c4a580) (0xc002861c20) Stream added, broadcasting: 3 I0130 21:11:42.352765 8 log.go:172] (0xc001c4a580) Reply frame received for 3 I0130 21:11:42.352792 8 log.go:172] (0xc001c4a580) (0xc002861cc0) Create stream I0130 21:11:42.352809 8 log.go:172] (0xc001c4a580) (0xc002861cc0) Stream added, broadcasting: 5 I0130 21:11:42.354089 8 log.go:172] (0xc001c4a580) Reply frame received for 5 I0130 21:11:42.412475 8 log.go:172] (0xc001c4a580) Data frame received for 3 I0130 21:11:42.412523 8 log.go:172] (0xc002861c20) (3) Data frame handling I0130 21:11:42.412551 8 log.go:172] (0xc002861c20) (3) Data frame sent I0130 21:11:42.488942 8 log.go:172] (0xc001c4a580) Data frame received for 1 I0130 21:11:42.489197 8 log.go:172] (0xc0026c8280) (1) Data frame handling I0130 21:11:42.489215 8 log.go:172] (0xc0026c8280) (1) Data frame sent I0130 21:11:42.489392 8 log.go:172] (0xc001c4a580) (0xc002861cc0) Stream removed, broadcasting: 5 I0130 21:11:42.489471 8 log.go:172] (0xc001c4a580) (0xc0026c8280) Stream removed, broadcasting: 1 I0130 21:11:42.489519 8 log.go:172] (0xc001c4a580) (0xc002861c20) Stream removed, broadcasting: 3 I0130 21:11:42.489552 8 log.go:172] (0xc001c4a580) Go away received I0130 21:11:42.489753 8 log.go:172] (0xc001c4a580) (0xc0026c8280) Stream removed, broadcasting: 1 I0130 21:11:42.489765 8 log.go:172] (0xc001c4a580) (0xc002861c20) Stream removed, broadcasting: 3 I0130 21:11:42.489785 8 log.go:172] (0xc001c4a580) (0xc002861cc0) Stream removed, broadcasting: 5 Jan 30 21:11:42.489: INFO: Exec stderr: "" Jan 30 
21:11:42.489: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5033 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 21:11:42.490: INFO: >>> kubeConfig: /root/.kube/config I0130 21:11:42.533059 8 log.go:172] (0xc001d4a630) (0xc002939cc0) Create stream I0130 21:11:42.533245 8 log.go:172] (0xc001d4a630) (0xc002939cc0) Stream added, broadcasting: 1 I0130 21:11:42.538777 8 log.go:172] (0xc001d4a630) Reply frame received for 1 I0130 21:11:42.538865 8 log.go:172] (0xc001d4a630) (0xc002861d60) Create stream I0130 21:11:42.538879 8 log.go:172] (0xc001d4a630) (0xc002861d60) Stream added, broadcasting: 3 I0130 21:11:42.540294 8 log.go:172] (0xc001d4a630) Reply frame received for 3 I0130 21:11:42.540320 8 log.go:172] (0xc001d4a630) (0xc002861e00) Create stream I0130 21:11:42.540325 8 log.go:172] (0xc001d4a630) (0xc002861e00) Stream added, broadcasting: 5 I0130 21:11:42.541643 8 log.go:172] (0xc001d4a630) Reply frame received for 5 I0130 21:11:42.617036 8 log.go:172] (0xc001d4a630) Data frame received for 3 I0130 21:11:42.617121 8 log.go:172] (0xc002861d60) (3) Data frame handling I0130 21:11:42.617153 8 log.go:172] (0xc002861d60) (3) Data frame sent I0130 21:11:42.692944 8 log.go:172] (0xc001d4a630) (0xc002861d60) Stream removed, broadcasting: 3 I0130 21:11:42.693050 8 log.go:172] (0xc001d4a630) Data frame received for 1 I0130 21:11:42.693066 8 log.go:172] (0xc002939cc0) (1) Data frame handling I0130 21:11:42.693078 8 log.go:172] (0xc002939cc0) (1) Data frame sent I0130 21:11:42.693140 8 log.go:172] (0xc001d4a630) (0xc002939cc0) Stream removed, broadcasting: 1 I0130 21:11:42.693173 8 log.go:172] (0xc001d4a630) (0xc002861e00) Stream removed, broadcasting: 5 I0130 21:11:42.693225 8 log.go:172] (0xc001d4a630) Go away received I0130 21:11:42.693392 8 log.go:172] (0xc001d4a630) (0xc002939cc0) Stream removed, broadcasting: 1 I0130 21:11:42.693401 8 log.go:172] (0xc001d4a630) (0xc002861d60) Stream removed, broadcasting: 3 I0130 21:11:42.693406 8 log.go:172] (0xc001d4a630) (0xc002861e00) Stream removed, broadcasting: 5 Jan 30 21:11:42.693: INFO: Exec stderr: "" Jan 30 21:11:42.693: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5033 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 21:11:42.693: INFO: >>> kubeConfig: /root/.kube/config I0130 21:11:42.727739 8 log.go:172] (0xc001542370) (0xc0020600a0) Create stream I0130 21:11:42.727915 8 log.go:172] (0xc001542370) (0xc0020600a0) Stream added, broadcasting: 1 I0130 21:11:42.732792 8 log.go:172] (0xc001542370) Reply frame received for 1 I0130 21:11:42.733004 8 log.go:172] (0xc001542370) (0xc00240c780) Create stream I0130 21:11:42.733079 8 log.go:172] (0xc001542370) (0xc00240c780) Stream added, broadcasting: 3 I0130 21:11:42.735147 8 log.go:172] (0xc001542370) Reply frame received for 3 I0130 21:11:42.735196 8 log.go:172] (0xc001542370) (0xc0026c8320) Create stream I0130 21:11:42.735224 8 log.go:172] (0xc001542370) (0xc0026c8320) Stream added, broadcasting: 5 I0130 21:11:42.738316 8 log.go:172] (0xc001542370) Reply frame received for 5 I0130 21:11:42.800585 8 log.go:172] (0xc001542370) Data frame received for 3 I0130 21:11:42.800740 8 log.go:172] (0xc00240c780) (3) Data frame handling I0130 21:11:42.800782 8 log.go:172] (0xc00240c780) (3) Data frame sent I0130 21:11:42.898876 8 log.go:172] (0xc001542370) (0xc00240c780) Stream removed, broadcasting: 3 
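(Annotation: the exec probes above and below compare /etc/hosts against /etc/hosts-original inside each container. A quick manual way to tell whether the kubelet owns a container's hosts file is to look at the banner the kubelet writes at the top of files it manages; the pod and container names here are illustrative, not this test's actual spec, and the exact banner text is stated from memory, so treat it as an assumption.)

    # Hypothetical manual check; assumes a running pod "demo" with container "c1".
    # Kubelet-managed files begin with a banner like "# Kubernetes-managed hosts file."
    kubectl exec demo -c c1 -- head -n1 /etc/hosts
    # If the container mounts /etc/hosts itself, or the pod runs hostNetwork=true,
    # the banner is absent and the file is whatever the node or volume provides.
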
I0130 21:11:42.899050 8 log.go:172] (0xc001542370) Data frame received for 1 I0130 21:11:42.899068 8 log.go:172] (0xc0020600a0) (1) Data frame handling I0130 21:11:42.899112 8 log.go:172] (0xc0020600a0) (1) Data frame sent I0130 21:11:42.899138 8 log.go:172] (0xc001542370) (0xc0026c8320) Stream removed, broadcasting: 5 I0130 21:11:42.899195 8 log.go:172] (0xc001542370) (0xc0020600a0) Stream removed, broadcasting: 1 I0130 21:11:42.899619 8 log.go:172] (0xc001542370) (0xc0020600a0) Stream removed, broadcasting: 1 I0130 21:11:42.899641 8 log.go:172] (0xc001542370) (0xc00240c780) Stream removed, broadcasting: 3 I0130 21:11:42.899653 8 log.go:172] (0xc001542370) (0xc0026c8320) Stream removed, broadcasting: 5 I0130 21:11:42.899921 8 log.go:172] (0xc001542370) Go away received Jan 30 21:11:42.900: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 30 21:11:42.900: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5033 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 21:11:42.900: INFO: >>> kubeConfig: /root/.kube/config I0130 21:11:42.945494 8 log.go:172] (0xc002088580) (0xc00240cb40) Create stream I0130 21:11:42.945674 8 log.go:172] (0xc002088580) (0xc00240cb40) Stream added, broadcasting: 1 I0130 21:11:42.950730 8 log.go:172] (0xc002088580) Reply frame received for 1 I0130 21:11:42.950868 8 log.go:172] (0xc002088580) (0xc0026c83c0) Create stream I0130 21:11:42.950883 8 log.go:172] (0xc002088580) (0xc0026c83c0) Stream added, broadcasting: 3 I0130 21:11:42.951948 8 log.go:172] (0xc002088580) Reply frame received for 3 I0130 21:11:42.952000 8 log.go:172] (0xc002088580) (0xc002939e00) Create stream I0130 21:11:42.952018 8 log.go:172] (0xc002088580) (0xc002939e00) Stream added, broadcasting: 5 I0130 21:11:42.953465 8 log.go:172] (0xc002088580) Reply frame received for 5 I0130 21:11:43.017784 8 log.go:172] (0xc002088580) Data frame received for 3 I0130 21:11:43.017847 8 log.go:172] (0xc0026c83c0) (3) Data frame handling I0130 21:11:43.017868 8 log.go:172] (0xc0026c83c0) (3) Data frame sent I0130 21:11:43.086769 8 log.go:172] (0xc002088580) (0xc0026c83c0) Stream removed, broadcasting: 3 I0130 21:11:43.086904 8 log.go:172] (0xc002088580) Data frame received for 1 I0130 21:11:43.086923 8 log.go:172] (0xc00240cb40) (1) Data frame handling I0130 21:11:43.086932 8 log.go:172] (0xc00240cb40) (1) Data frame sent I0130 21:11:43.086940 8 log.go:172] (0xc002088580) (0xc002939e00) Stream removed, broadcasting: 5 I0130 21:11:43.086972 8 log.go:172] (0xc002088580) (0xc00240cb40) Stream removed, broadcasting: 1 I0130 21:11:43.086983 8 log.go:172] (0xc002088580) Go away received I0130 21:11:43.087249 8 log.go:172] (0xc002088580) (0xc00240cb40) Stream removed, broadcasting: 1 I0130 21:11:43.087280 8 log.go:172] (0xc002088580) (0xc0026c83c0) Stream removed, broadcasting: 3 I0130 21:11:43.087291 8 log.go:172] (0xc002088580) (0xc002939e00) Stream removed, broadcasting: 5 Jan 30 21:11:43.087: INFO: Exec stderr: "" Jan 30 21:11:43.087: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5033 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 21:11:43.087: INFO: >>> kubeConfig: /root/.kube/config I0130 21:11:43.125761 8 log.go:172] (0xc001c4ac60) (0xc0026c8640) Create stream I0130 21:11:43.125954 8 log.go:172] (0xc001c4ac60) (0xc0026c8640) 
Stream added, broadcasting: 1 I0130 21:11:43.129895 8 log.go:172] (0xc001c4ac60) Reply frame received for 1 I0130 21:11:43.129944 8 log.go:172] (0xc001c4ac60) (0xc00240cbe0) Create stream I0130 21:11:43.129955 8 log.go:172] (0xc001c4ac60) (0xc00240cbe0) Stream added, broadcasting: 3 I0130 21:11:43.130831 8 log.go:172] (0xc001c4ac60) Reply frame received for 3 I0130 21:11:43.130854 8 log.go:172] (0xc001c4ac60) (0xc00240cc80) Create stream I0130 21:11:43.130862 8 log.go:172] (0xc001c4ac60) (0xc00240cc80) Stream added, broadcasting: 5 I0130 21:11:43.131955 8 log.go:172] (0xc001c4ac60) Reply frame received for 5 I0130 21:11:43.182416 8 log.go:172] (0xc001c4ac60) Data frame received for 3 I0130 21:11:43.182669 8 log.go:172] (0xc00240cbe0) (3) Data frame handling I0130 21:11:43.182715 8 log.go:172] (0xc00240cbe0) (3) Data frame sent I0130 21:11:43.250280 8 log.go:172] (0xc001c4ac60) Data frame received for 1 I0130 21:11:43.250366 8 log.go:172] (0xc0026c8640) (1) Data frame handling I0130 21:11:43.250394 8 log.go:172] (0xc0026c8640) (1) Data frame sent I0130 21:11:43.250411 8 log.go:172] (0xc001c4ac60) (0xc0026c8640) Stream removed, broadcasting: 1 I0130 21:11:43.250723 8 log.go:172] (0xc001c4ac60) (0xc00240cc80) Stream removed, broadcasting: 5 I0130 21:11:43.250757 8 log.go:172] (0xc001c4ac60) (0xc00240cbe0) Stream removed, broadcasting: 3 I0130 21:11:43.250787 8 log.go:172] (0xc001c4ac60) (0xc0026c8640) Stream removed, broadcasting: 1 I0130 21:11:43.250795 8 log.go:172] (0xc001c4ac60) (0xc00240cbe0) Stream removed, broadcasting: 3 I0130 21:11:43.250804 8 log.go:172] (0xc001c4ac60) (0xc00240cc80) Stream removed, broadcasting: 5 Jan 30 21:11:43.251: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 30 21:11:43.251: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5033 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 21:11:43.251: INFO: >>> kubeConfig: /root/.kube/config I0130 21:11:43.252199 8 log.go:172] (0xc001c4ac60) Go away received I0130 21:11:43.284479 8 log.go:172] (0xc001c4b290) (0xc0026c8820) Create stream I0130 21:11:43.284591 8 log.go:172] (0xc001c4b290) (0xc0026c8820) Stream added, broadcasting: 1 I0130 21:11:43.288744 8 log.go:172] (0xc001c4b290) Reply frame received for 1 I0130 21:11:43.288787 8 log.go:172] (0xc001c4b290) (0xc0020601e0) Create stream I0130 21:11:43.288794 8 log.go:172] (0xc001c4b290) (0xc0020601e0) Stream added, broadcasting: 3 I0130 21:11:43.290095 8 log.go:172] (0xc001c4b290) Reply frame received for 3 I0130 21:11:43.290117 8 log.go:172] (0xc001c4b290) (0xc002939ea0) Create stream I0130 21:11:43.290126 8 log.go:172] (0xc001c4b290) (0xc002939ea0) Stream added, broadcasting: 5 I0130 21:11:43.291095 8 log.go:172] (0xc001c4b290) Reply frame received for 5 I0130 21:11:43.346234 8 log.go:172] (0xc001c4b290) Data frame received for 3 I0130 21:11:43.346476 8 log.go:172] (0xc0020601e0) (3) Data frame handling I0130 21:11:43.346532 8 log.go:172] (0xc0020601e0) (3) Data frame sent I0130 21:11:43.407255 8 log.go:172] (0xc001c4b290) Data frame received for 1 I0130 21:11:43.407456 8 log.go:172] (0xc001c4b290) (0xc0020601e0) Stream removed, broadcasting: 3 I0130 21:11:43.407625 8 log.go:172] (0xc0026c8820) (1) Data frame handling I0130 21:11:43.407654 8 log.go:172] (0xc0026c8820) (1) Data frame sent I0130 21:11:43.407669 8 log.go:172] (0xc001c4b290) (0xc0026c8820) Stream removed, 
broadcasting: 1 I0130 21:11:43.408098 8 log.go:172] (0xc001c4b290) (0xc002939ea0) Stream removed, broadcasting: 5 I0130 21:11:43.408349 8 log.go:172] (0xc001c4b290) Go away received I0130 21:11:43.408443 8 log.go:172] (0xc001c4b290) (0xc0026c8820) Stream removed, broadcasting: 1 I0130 21:11:43.408542 8 log.go:172] (0xc001c4b290) (0xc0020601e0) Stream removed, broadcasting: 3 I0130 21:11:43.408550 8 log.go:172] (0xc001c4b290) (0xc002939ea0) Stream removed, broadcasting: 5 Jan 30 21:11:43.408: INFO: Exec stderr: "" Jan 30 21:11:43.408: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5033 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 21:11:43.408: INFO: >>> kubeConfig: /root/.kube/config I0130 21:11:43.454040 8 log.go:172] (0xc001d4ae70) (0xc00244e000) Create stream I0130 21:11:43.454210 8 log.go:172] (0xc001d4ae70) (0xc00244e000) Stream added, broadcasting: 1 I0130 21:11:43.466529 8 log.go:172] (0xc001d4ae70) Reply frame received for 1 I0130 21:11:43.466821 8 log.go:172] (0xc001d4ae70) (0xc0026c8960) Create stream I0130 21:11:43.466846 8 log.go:172] (0xc001d4ae70) (0xc0026c8960) Stream added, broadcasting: 3 I0130 21:11:43.472098 8 log.go:172] (0xc001d4ae70) Reply frame received for 3 I0130 21:11:43.472127 8 log.go:172] (0xc001d4ae70) (0xc00240cd20) Create stream I0130 21:11:43.472136 8 log.go:172] (0xc001d4ae70) (0xc00240cd20) Stream added, broadcasting: 5 I0130 21:11:43.474616 8 log.go:172] (0xc001d4ae70) Reply frame received for 5 I0130 21:11:43.562275 8 log.go:172] (0xc001d4ae70) Data frame received for 3 I0130 21:11:43.562424 8 log.go:172] (0xc0026c8960) (3) Data frame handling I0130 21:11:43.562488 8 log.go:172] (0xc0026c8960) (3) Data frame sent I0130 21:11:43.629990 8 log.go:172] (0xc001d4ae70) Data frame received for 1 I0130 21:11:43.630346 8 log.go:172] (0xc001d4ae70) (0xc0026c8960) Stream removed, broadcasting: 3 I0130 21:11:43.630413 8 log.go:172] (0xc00244e000) (1) Data frame handling I0130 21:11:43.630447 8 log.go:172] (0xc00244e000) (1) Data frame sent I0130 21:11:43.630503 8 log.go:172] (0xc001d4ae70) (0xc00240cd20) Stream removed, broadcasting: 5 I0130 21:11:43.630572 8 log.go:172] (0xc001d4ae70) (0xc00244e000) Stream removed, broadcasting: 1 I0130 21:11:43.630615 8 log.go:172] (0xc001d4ae70) Go away received I0130 21:11:43.631310 8 log.go:172] (0xc001d4ae70) (0xc00244e000) Stream removed, broadcasting: 1 I0130 21:11:43.631453 8 log.go:172] (0xc001d4ae70) (0xc0026c8960) Stream removed, broadcasting: 3 I0130 21:11:43.631482 8 log.go:172] (0xc001d4ae70) (0xc00240cd20) Stream removed, broadcasting: 5 Jan 30 21:11:43.631: INFO: Exec stderr: "" Jan 30 21:11:43.631: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5033 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 21:11:43.631: INFO: >>> kubeConfig: /root/.kube/config I0130 21:11:43.671955 8 log.go:172] (0xc002088840) (0xc00240cdc0) Create stream I0130 21:11:43.672102 8 log.go:172] (0xc002088840) (0xc00240cdc0) Stream added, broadcasting: 1 I0130 21:11:43.675750 8 log.go:172] (0xc002088840) Reply frame received for 1 I0130 21:11:43.675895 8 log.go:172] (0xc002088840) (0xc00240ce60) Create stream I0130 21:11:43.675906 8 log.go:172] (0xc002088840) (0xc00240ce60) Stream added, broadcasting: 3 I0130 21:11:43.677164 8 log.go:172] (0xc002088840) Reply frame received for 3 I0130 21:11:43.677193 8 
log.go:172] (0xc002088840) (0xc00240cf00) Create stream I0130 21:11:43.677204 8 log.go:172] (0xc002088840) (0xc00240cf00) Stream added, broadcasting: 5 I0130 21:11:43.678742 8 log.go:172] (0xc002088840) Reply frame received for 5 I0130 21:11:43.755247 8 log.go:172] (0xc002088840) Data frame received for 3 I0130 21:11:43.755505 8 log.go:172] (0xc00240ce60) (3) Data frame handling I0130 21:11:43.755540 8 log.go:172] (0xc00240ce60) (3) Data frame sent I0130 21:11:43.916714 8 log.go:172] (0xc002088840) Data frame received for 1 I0130 21:11:43.916937 8 log.go:172] (0xc002088840) (0xc00240cf00) Stream removed, broadcasting: 5 I0130 21:11:43.916997 8 log.go:172] (0xc00240cdc0) (1) Data frame handling I0130 21:11:43.917055 8 log.go:172] (0xc00240cdc0) (1) Data frame sent I0130 21:11:43.917063 8 log.go:172] (0xc002088840) (0xc00240ce60) Stream removed, broadcasting: 3 I0130 21:11:43.917107 8 log.go:172] (0xc002088840) (0xc00240cdc0) Stream removed, broadcasting: 1 I0130 21:11:43.917139 8 log.go:172] (0xc002088840) Go away received I0130 21:11:43.917505 8 log.go:172] (0xc002088840) (0xc00240cdc0) Stream removed, broadcasting: 1 I0130 21:11:43.917521 8 log.go:172] (0xc002088840) (0xc00240ce60) Stream removed, broadcasting: 3 I0130 21:11:43.917536 8 log.go:172] (0xc002088840) (0xc00240cf00) Stream removed, broadcasting: 5 Jan 30 21:11:43.917: INFO: Exec stderr: "" Jan 30 21:11:43.917: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5033 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 30 21:11:43.917: INFO: >>> kubeConfig: /root/.kube/config I0130 21:11:43.954920 8 log.go:172] (0xc001c4b970) (0xc0026c8b40) Create stream I0130 21:11:43.955201 8 log.go:172] (0xc001c4b970) (0xc0026c8b40) Stream added, broadcasting: 1 I0130 21:11:43.960223 8 log.go:172] (0xc001c4b970) Reply frame received for 1 I0130 21:11:43.960390 8 log.go:172] (0xc001c4b970) (0xc0023e2000) Create stream I0130 21:11:43.960402 8 log.go:172] (0xc001c4b970) (0xc0023e2000) Stream added, broadcasting: 3 I0130 21:11:43.961781 8 log.go:172] (0xc001c4b970) Reply frame received for 3 I0130 21:11:43.961803 8 log.go:172] (0xc001c4b970) (0xc00240cfa0) Create stream I0130 21:11:43.961813 8 log.go:172] (0xc001c4b970) (0xc00240cfa0) Stream added, broadcasting: 5 I0130 21:11:43.964039 8 log.go:172] (0xc001c4b970) Reply frame received for 5 I0130 21:11:44.049297 8 log.go:172] (0xc001c4b970) Data frame received for 3 I0130 21:11:44.049477 8 log.go:172] (0xc0023e2000) (3) Data frame handling I0130 21:11:44.049498 8 log.go:172] (0xc0023e2000) (3) Data frame sent I0130 21:11:44.143352 8 log.go:172] (0xc001c4b970) (0xc0023e2000) Stream removed, broadcasting: 3 I0130 21:11:44.143562 8 log.go:172] (0xc001c4b970) (0xc00240cfa0) Stream removed, broadcasting: 5 I0130 21:11:44.143669 8 log.go:172] (0xc001c4b970) Data frame received for 1 I0130 21:11:44.143771 8 log.go:172] (0xc0026c8b40) (1) Data frame handling I0130 21:11:44.143801 8 log.go:172] (0xc0026c8b40) (1) Data frame sent I0130 21:11:44.143839 8 log.go:172] (0xc001c4b970) (0xc0026c8b40) Stream removed, broadcasting: 1 I0130 21:11:44.143879 8 log.go:172] (0xc001c4b970) Go away received I0130 21:11:44.144550 8 log.go:172] (0xc001c4b970) (0xc0026c8b40) Stream removed, broadcasting: 1 I0130 21:11:44.144614 8 log.go:172] (0xc001c4b970) (0xc0023e2000) Stream removed, broadcasting: 3 I0130 21:11:44.144644 8 log.go:172] (0xc001c4b970) (0xc00240cfa0) Stream removed, broadcasting: 5 Jan 30 
21:11:44.144: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:11:44.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5033" for this suite. • [SLOW TEST:22.225 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":213,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:11:44.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:11:44.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 30 21:11:44.398: INFO: stderr: "" Jan 30 21:11:44.398: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:11:44.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9397" for this suite. 
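(Annotation: the stdout captured above contains both halves that `kubectl version` must print — the client build info and the server build info. A minimal sketch of the same check done by hand, assuming kubectl is on PATH and the kubeconfig path below exists; `jq` is an extra dependency the suite itself does not use.)

    # Print both version structs, then pull out just the two gitVersion fields.
    kubectl --kubeconfig=/root/.kube/config version
    kubectl --kubeconfig=/root/.kube/config version -o json \
      | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'
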
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":13,"skipped":219,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:11:44.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-2a180855-16fa-491b-bcb3-79746abe1691 STEP: Creating configMap with name cm-test-opt-upd-ffbd7ae0-f1ce-4551-887f-8646221260e0 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-2a180855-16fa-491b-bcb3-79746abe1691 STEP: Updating configmap cm-test-opt-upd-ffbd7ae0-f1ce-4551-887f-8646221260e0 STEP: Creating configMap with name cm-test-opt-create-3e4f6533-72a8-4528-9087-9d24010d5bfb STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:13:17.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7971" for this suite. • [SLOW TEST:93.222 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":239,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:13:17.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9866 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-9866 Jan 30 21:13:17.771: INFO: Found 0 stateful pods, waiting for 1 Jan 30 
21:13:27.780: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 30 21:13:27.819: INFO: Deleting all statefulset in ns statefulset-9866 Jan 30 21:13:27.828: INFO: Scaling statefulset ss to 0 Jan 30 21:13:47.973: INFO: Waiting for statefulset status.replicas updated to 0 Jan 30 21:13:47.978: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:13:47.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9866" for this suite. • [SLOW TEST:30.382 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":15,"skipped":251,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:13:48.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Jan 30 21:13:48.123: INFO: Waiting up to 5m0s for pod "var-expansion-789c58e8-94bf-4310-a191-bcc9a70d168b" in namespace "var-expansion-7067" to be "success or failure" Jan 30 21:13:48.134: INFO: Pod "var-expansion-789c58e8-94bf-4310-a191-bcc9a70d168b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.166443ms Jan 30 21:13:50.140: INFO: Pod "var-expansion-789c58e8-94bf-4310-a191-bcc9a70d168b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017109626s Jan 30 21:13:52.145: INFO: Pod "var-expansion-789c58e8-94bf-4310-a191-bcc9a70d168b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021718173s Jan 30 21:13:54.150: INFO: Pod "var-expansion-789c58e8-94bf-4310-a191-bcc9a70d168b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027234368s Jan 30 21:13:56.163: INFO: Pod "var-expansion-789c58e8-94bf-4310-a191-bcc9a70d168b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.039523534s STEP: Saw pod success Jan 30 21:13:56.163: INFO: Pod "var-expansion-789c58e8-94bf-4310-a191-bcc9a70d168b" satisfied condition "success or failure" Jan 30 21:13:56.167: INFO: Trying to get logs from node jerma-node pod var-expansion-789c58e8-94bf-4310-a191-bcc9a70d168b container dapi-container: STEP: delete the pod Jan 30 21:13:56.230: INFO: Waiting for pod var-expansion-789c58e8-94bf-4310-a191-bcc9a70d168b to disappear Jan 30 21:13:56.235: INFO: Pod var-expansion-789c58e8-94bf-4310-a191-bcc9a70d168b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:13:56.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7067" for this suite. • [SLOW TEST:8.281 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":268,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:13:56.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0130 21:14:26.990823 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
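(Annotation: the deletion issued in the steps above sets deleteOptions.PropagationPolicy to Orphan, so the ReplicaSet must outlive its owning Deployment for the 30-second watch window; the metrics dump that follows is just the framework's post-test bookkeeping. A rough manual equivalent, with a hypothetical deployment name and assuming kubectl v1.17 semantics, where --cascade=false means "orphan dependents".)

    # Create a deployment, then delete only the deployment object itself.
    kubectl create deployment orphan-demo --image=nginx
    kubectl delete deployment orphan-demo --cascade=false
    # The ReplicaSet (and its pods) should still be listed afterwards:
    kubectl get rs,pods -l app=orphan-demo
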
Jan 30 21:14:26.990: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:14:26.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8263" for this suite. • [SLOW TEST:30.711 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":17,"skipped":279,"failed":0} [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:14:27.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-8462/secret-test-e926ea75-7f60-406d-ba3e-7168e73b67ed STEP: Creating a pod to test consume secrets Jan 30 21:14:27.148: INFO: Waiting up to 5m0s for pod "pod-configmaps-9c172e23-3a30-4fed-a4fa-865a4da4b0b4" in namespace "secrets-8462" to be "success or failure" Jan 30 21:14:27.164: INFO: Pod "pod-configmaps-9c172e23-3a30-4fed-a4fa-865a4da4b0b4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.339634ms Jan 30 21:14:29.171: INFO: Pod "pod-configmaps-9c172e23-3a30-4fed-a4fa-865a4da4b0b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022537029s Jan 30 21:14:31.179: INFO: Pod "pod-configmaps-9c172e23-3a30-4fed-a4fa-865a4da4b0b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030529864s Jan 30 21:14:34.893: INFO: Pod "pod-configmaps-9c172e23-3a30-4fed-a4fa-865a4da4b0b4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.745048259s Jan 30 21:14:36.904: INFO: Pod "pod-configmaps-9c172e23-3a30-4fed-a4fa-865a4da4b0b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.755636133s STEP: Saw pod success Jan 30 21:14:36.904: INFO: Pod "pod-configmaps-9c172e23-3a30-4fed-a4fa-865a4da4b0b4" satisfied condition "success or failure" Jan 30 21:14:36.912: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9c172e23-3a30-4fed-a4fa-865a4da4b0b4 container env-test: STEP: delete the pod Jan 30 21:14:37.350: INFO: Waiting for pod pod-configmaps-9c172e23-3a30-4fed-a4fa-865a4da4b0b4 to disappear Jan 30 21:14:37.386: INFO: Pod pod-configmaps-9c172e23-3a30-4fed-a4fa-865a4da4b0b4 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:14:37.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8462" for this suite. • [SLOW TEST:10.571 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":279,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:14:37.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8493.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8493.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8493.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8493.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8493.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8493.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 30 21:14:49.957: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:49.965: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:49.970: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:49.973: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:49.990: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:49.995: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:49.998: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod 
dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:50.003: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:50.012: INFO: Lookups using dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local] Jan 30 21:14:55.018: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:55.021: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:55.024: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:55.026: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:55.036: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:55.038: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:55.041: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:55.044: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:14:55.056: INFO: Lookups using dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local] Jan 30 21:15:00.037: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:00.045: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:00.049: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:00.053: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:00.087: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:00.094: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:00.099: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:00.102: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:00.110: INFO: Lookups using dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local] Jan 30 21:15:05.018: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:05.023: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:05.030: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:05.037: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:05.052: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:05.055: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:05.058: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:05.060: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:05.066: INFO: Lookups using dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local] Jan 30 21:15:10.021: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:10.026: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:10.030: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:10.035: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested 
resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:10.052: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:10.059: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:10.063: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:10.069: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:10.076: INFO: Lookups using dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local] Jan 30 21:15:15.022: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:15.027: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:15.033: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:15.038: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:15.061: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:15.068: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:15.074: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:15.077: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local from pod dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01: the server could not find the requested resource (get pods dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01) Jan 30 21:15:15.083: INFO: Lookups using dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8493.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8493.svc.cluster.local jessie_udp@dns-test-service-2.dns-8493.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8493.svc.cluster.local] Jan 30 21:15:20.051: INFO: DNS probes using dns-8493/dns-test-3c850455-f9ea-4f80-ad34-612bdcd1cb01 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:15:20.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8493" for this suite. • [SLOW TEST:42.771 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":19,"skipped":288,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:15:20.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Jan 30 21:15:29.006: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1540 pod-service-account-38140af0-0fdd-4c22-ba4e-89d41434de8f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 30 21:15:31.195: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1540 pod-service-account-38140af0-0fdd-4c22-ba4e-89d41434de8f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 30 21:15:31.576: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1540 pod-service-account-38140af0-0fdd-4c22-ba4e-89d41434de8f -c=test -- cat 
/var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:15:31.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1540" for this suite. • [SLOW TEST:11.622 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":20,"skipped":291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:15:31.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 30 21:15:32.161: INFO: Waiting up to 5m0s for pod "pod-5462b1ba-4de4-402e-8310-04092e9ed0c8" in namespace "emptydir-7934" to be "success or failure" Jan 30 21:15:32.170: INFO: Pod "pod-5462b1ba-4de4-402e-8310-04092e9ed0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.763696ms Jan 30 21:15:34.175: INFO: Pod "pod-5462b1ba-4de4-402e-8310-04092e9ed0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014720967s Jan 30 21:15:36.184: INFO: Pod "pod-5462b1ba-4de4-402e-8310-04092e9ed0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023091176s Jan 30 21:15:38.198: INFO: Pod "pod-5462b1ba-4de4-402e-8310-04092e9ed0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037286914s Jan 30 21:15:40.205: INFO: Pod "pod-5462b1ba-4de4-402e-8310-04092e9ed0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044191896s Jan 30 21:15:42.233: INFO: Pod "pod-5462b1ba-4de4-402e-8310-04092e9ed0c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072730928s STEP: Saw pod success Jan 30 21:15:42.233: INFO: Pod "pod-5462b1ba-4de4-402e-8310-04092e9ed0c8" satisfied condition "success or failure" Jan 30 21:15:42.243: INFO: Trying to get logs from node jerma-node pod pod-5462b1ba-4de4-402e-8310-04092e9ed0c8 container test-container: STEP: delete the pod Jan 30 21:15:42.266: INFO: Waiting for pod pod-5462b1ba-4de4-402e-8310-04092e9ed0c8 to disappear Jan 30 21:15:42.274: INFO: Pod pod-5462b1ba-4de4-402e-8310-04092e9ed0c8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:15:42.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7934" for this suite. 
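Aside: the emptyDir case just logged reduces to a pod spec along the following lines. This is a minimal sketch, not the exact e2e manifest; the pod name is illustrative and busybox stands in for the framework's mounttest image.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-demo          # illustrative name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                 # any non-root UID, per the (non-root,...) case
      containers:
      - name: test-container
        image: busybox                  # stand-in for the e2e mounttest image
        command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory                # tmpfs-backed, per the (...,tmpfs) case
    EOF

The pod should run to Succeeded with the file listed as -rw-r--r--, which is what the "success or failure" wait above is checking for.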
• [SLOW TEST:10.309 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":320,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:15:42.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:15:42.636: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"95017b7d-de0e-448f-aedc-9176ca80d1cd", Controller:(*bool)(0xc001e262ba), BlockOwnerDeletion:(*bool)(0xc001e262bb)}} Jan 30 21:15:42.649: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"61ec2b05-d052-43ea-86f5-1a1d234ade0e", Controller:(*bool)(0xc000b3f99a), BlockOwnerDeletion:(*bool)(0xc000b3f99b)}} Jan 30 21:15:42.661: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6057d180-2697-44c4-8c3b-3935f8601228", Controller:(*bool)(0xc000457f52), BlockOwnerDeletion:(*bool)(0xc000457f53)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:15:47.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4731" for this suite. 
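Aside: the three OwnerReferences dumps above form a deliberate cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), each with BlockOwnerDeletion set. A quick way to inspect such references on a live cluster, using the pod names and namespace from this run (the column spec is illustrative):

    kubectl -n gc-4731 get pods pod1 pod2 pod3 \
      -o custom-columns='NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name,BLOCK:.metadata.ownerReferences[0].blockOwnerDeletion'

The point of the test is that the garbage collector tolerates the cycle and still deletes the pods, instead of deadlocking on the ownership ordering.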
• [SLOW TEST:5.457 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":22,"skipped":332,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:15:47.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jan 30 21:15:47.809: INFO: namespace kubectl-6365 Jan 30 21:15:47.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6365' Jan 30 21:15:48.459: INFO: stderr: "" Jan 30 21:15:48.460: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 30 21:15:49.467: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:15:49.468: INFO: Found 0 / 1 Jan 30 21:15:50.469: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:15:50.470: INFO: Found 0 / 1 Jan 30 21:15:51.474: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:15:51.474: INFO: Found 0 / 1 Jan 30 21:15:52.469: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:15:52.469: INFO: Found 0 / 1 Jan 30 21:15:53.467: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:15:53.467: INFO: Found 0 / 1 Jan 30 21:15:54.480: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:15:54.480: INFO: Found 0 / 1 Jan 30 21:15:55.480: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:15:55.480: INFO: Found 1 / 1 Jan 30 21:15:55.480: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 30 21:15:55.483: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:15:55.483: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 30 21:15:55.483: INFO: wait on agnhost-master startup in kubectl-6365 Jan 30 21:15:55.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-8jbl6 agnhost-master --namespace=kubectl-6365' Jan 30 21:15:55.668: INFO: stderr: "" Jan 30 21:15:55.668: INFO: stdout: "Paused\n" STEP: exposing RC Jan 30 21:15:55.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6365' Jan 30 21:15:55.807: INFO: stderr: "" Jan 30 21:15:55.807: INFO: stdout: "service/rm2 exposed\n" Jan 30 21:15:55.818: INFO: Service rm2 in namespace kubectl-6365 found. 
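Aside: stripped of the framework wrapper, the command just echoed is plain kubectl expose against a replication controller; the next step, right below, exposes the resulting service a second time under a new name. Verbatim from this run, minus the --kubeconfig flag:

    kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6365
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6365

Both forms copy the selector from the exposed object, so rm2 and rm3 end up pointing at the same agnhost pod on port 6379.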
STEP: exposing service Jan 30 21:15:57.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6365' Jan 30 21:15:58.086: INFO: stderr: "" Jan 30 21:15:58.086: INFO: stdout: "service/rm3 exposed\n" Jan 30 21:15:58.097: INFO: Service rm3 in namespace kubectl-6365 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:16:00.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6365" for this suite. • [SLOW TEST:12.395 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":23,"skipped":351,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:16:00.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 30 21:16:07.448: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:16:07.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6152" for this suite. 
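Aside: the policy under test here is a per-container field. A minimal sketch (pod name illustrative; busybox is a reasonable stand-in for the image the suite uses):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-msg-demo        # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: term
        image: busybox
        command: ["true"]               # exit 0 with empty logs
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # After termination, the message should be empty, matching the "Expected: &{}" check above:
    kubectl get pod termination-msg-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'

FallbackToLogsOnError only falls back to the container logs when the container exits with an error and wrote no termination-message file; a clean exit like this one leaves the message empty.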
• [SLOW TEST:7.428 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":372,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:16:07.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Jan 30 21:16:07.697: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Jan 30 21:16:21.298: INFO: >>> kubeConfig: /root/.kube/config Jan 30 21:16:24.400: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:16:37.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1308" for this suite. 
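Aside: "multiple CRDs of same group but different versions" means the apiserver must publish schema entries for, say, example.com/v1 and example.com/v2 side by side. A minimal multi-version CRD sketch (group, names, and schemas all illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
      - name: v2
        served: true
        storage: false
        schema:
          openAPIV3Schema:
            type: object
    EOF
    # Both versions should then show up in the published OpenAPI document
    # (definition names follow the reverse-DNS group + version + kind convention):
    kubectl get --raw /openapi/v2 | grep -o 'com\.example\.v[12]\.Foo' | sort -u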
• [SLOW TEST:29.469 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":25,"skipped":389,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:16:37.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444 STEP: creating an pod Jan 30 21:16:37.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6226 -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 30 21:16:37.391: INFO: stderr: "" Jan 30 21:16:37.392: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Jan 30 21:16:37.392: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 30 21:16:37.392: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6226" to be "running and ready, or succeeded" Jan 30 21:16:37.428: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 35.80578ms Jan 30 21:16:39.467: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075440085s Jan 30 21:16:42.211: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.819418829s Jan 30 21:16:44.220: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.827649744s Jan 30 21:16:44.220: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 30 21:16:44.220: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Jan 30 21:16:44.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6226' Jan 30 21:16:44.392: INFO: stderr: "" Jan 30 21:16:44.392: INFO: stdout: "I0130 21:16:42.524969 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/ljhx 308\nI0130 21:16:42.725344 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/96sq 395\nI0130 21:16:42.925569 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/8pt 327\nI0130 21:16:43.125157 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/k6mt 532\nI0130 21:16:43.325056 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/mgvn 590\nI0130 21:16:43.526113 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/44w 341\nI0130 21:16:43.725216 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/p56 418\nI0130 21:16:43.925071 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/fkw 515\nI0130 21:16:44.125086 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/d2r6 230\nI0130 21:16:44.325347 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/xwt 564\n" STEP: limiting log lines Jan 30 21:16:44.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6226 --tail=1' Jan 30 21:16:44.537: INFO: stderr: "" Jan 30 21:16:44.537: INFO: stdout: "I0130 21:16:44.325347 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/xwt 564\n" Jan 30 21:16:44.537: INFO: got output "I0130 21:16:44.325347 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/xwt 564\n" STEP: limiting log bytes Jan 30 21:16:44.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6226 --limit-bytes=1' Jan 30 21:16:44.656: INFO: stderr: "" Jan 30 21:16:44.656: INFO: stdout: "I" Jan 30 21:16:44.656: INFO: got output "I" STEP: exposing timestamps Jan 30 21:16:44.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6226 --tail=1 --timestamps' Jan 30 21:16:44.761: INFO: stderr: "" Jan 30 21:16:44.761: INFO: stdout: "2020-01-30T21:16:44.726446666Z I0130 21:16:44.725347 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/mm57 567\n" Jan 30 21:16:44.761: INFO: got output "2020-01-30T21:16:44.726446666Z I0130 21:16:44.725347 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/mm57 567\n" STEP: restricting to a time range Jan 30 21:16:47.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6226 --since=1s' Jan 30 21:16:47.446: INFO: stderr: "" Jan 30 21:16:47.447: INFO: stdout: "I0130 21:16:46.525233 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/h7nh 353\nI0130 21:16:46.725699 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/bdj 348\nI0130 21:16:46.925208 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/xc6 365\nI0130 21:16:47.125133 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/fhk 299\nI0130 21:16:47.325040 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/6rh 494\n" Jan 30 21:16:47.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6226 --since=24h' Jan 30 21:16:47.611: INFO: stderr: "" Jan 30 21:16:47.612: 
INFO: stdout: "I0130 21:16:42.524969 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/ljhx 308\nI0130 21:16:42.725344 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/96sq 395\nI0130 21:16:42.925569 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/8pt 327\nI0130 21:16:43.125157 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/k6mt 532\nI0130 21:16:43.325056 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/mgvn 590\nI0130 21:16:43.526113 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/44w 341\nI0130 21:16:43.725216 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/p56 418\nI0130 21:16:43.925071 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/fkw 515\nI0130 21:16:44.125086 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/d2r6 230\nI0130 21:16:44.325347 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/xwt 564\nI0130 21:16:44.525173 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/6xmd 567\nI0130 21:16:44.725347 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/mm57 567\nI0130 21:16:44.925071 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/9vkn 245\nI0130 21:16:45.125139 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/4hc9 419\nI0130 21:16:45.325566 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/j2xl 355\nI0130 21:16:45.525175 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/9w8 208\nI0130 21:16:45.725203 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/mg7 233\nI0130 21:16:45.925075 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/4f5d 349\nI0130 21:16:46.125063 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/9kx 341\nI0130 21:16:46.325049 1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/pcx 528\nI0130 21:16:46.525233 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/h7nh 353\nI0130 21:16:46.725699 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/bdj 348\nI0130 21:16:46.925208 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/xc6 365\nI0130 21:16:47.125133 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/fhk 299\nI0130 21:16:47.325040 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/6rh 494\nI0130 21:16:47.525024 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/gpcv 251\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 Jan 30 21:16:47.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6226' Jan 30 21:17:02.429: INFO: stderr: "" Jan 30 21:17:02.429: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:17:02.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6226" for this suite. 
• [SLOW TEST:25.413 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":26,"skipped":391,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:17:02.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 30 21:17:02.911: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jan 30 21:17:04.932: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:17:06.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:17:08.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015822, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 21:17:12.010: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:17:12.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2128" for this suite. STEP: Destroying namespace "webhook-2128-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.778 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":27,"skipped":400,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:17:12.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 30 21:17:12.514: INFO: Number of nodes with available pods: 0 Jan 30 21:17:12.514: INFO: Node jerma-node is running more than one daemon pod Jan 30 21:17:14.756: INFO: Number of nodes with available pods: 0 Jan 30 21:17:14.756: INFO: Node jerma-node is running more than one daemon pod Jan 30 21:17:15.549: INFO: Number of nodes with available pods: 0 Jan 30 21:17:15.549: INFO: Node jerma-node is running more than one daemon pod Jan 30 21:17:16.537: INFO: Number of nodes with available pods: 0 Jan 30 21:17:16.537: INFO: Node jerma-node is running more than one daemon pod Jan 30 21:17:17.526: INFO: Number of nodes with available pods: 0 Jan 30 21:17:17.527: INFO: Node jerma-node is running more than one daemon pod Jan 30 21:17:19.290: INFO: Number of nodes with available pods: 0 Jan 30 21:17:19.290: INFO: Node jerma-node is running more than one daemon pod Jan 30 21:17:19.559: INFO: Number of nodes with available pods: 0 Jan 30 21:17:19.559: INFO: Node jerma-node is running more than one daemon pod Jan 30 21:17:20.535: INFO: Number of nodes with available pods: 0 Jan 30 21:17:20.535: INFO: Node jerma-node is running more than one daemon pod Jan 30 21:17:21.548: INFO: Number of nodes with available pods: 0 Jan 30 21:17:21.548: INFO: Node jerma-node is running more than one daemon pod Jan 30 21:17:22.547: INFO: Number of nodes with available pods: 1 Jan 30 21:17:22.547: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 30 21:17:23.530: INFO: Number of nodes with available pods: 2 Jan 30 21:17:23.530: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jan 30 21:17:23.616: INFO: Number of nodes with available pods: 1 Jan 30 21:17:23.617: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 30 21:17:24.815: INFO: Number of nodes with available pods: 1 Jan 30 21:17:24.815: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 30 21:17:25.631: INFO: Number of nodes with available pods: 1 Jan 30 21:17:25.631: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 30 21:17:26.915: INFO: Number of nodes with available pods: 1 Jan 30 21:17:26.915: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 30 21:17:27.627: INFO: Number of nodes with available pods: 1 Jan 30 21:17:27.628: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 30 21:17:28.662: INFO: Number of nodes with available pods: 1 Jan 30 21:17:28.662: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 30 21:17:29.698: INFO: Number of nodes with available pods: 1 Jan 30 21:17:29.698: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 30 21:17:30.632: INFO: Number of nodes with available pods: 1 Jan 30 21:17:30.632: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 30 21:17:31.635: INFO: Number of nodes with available pods: 2 Jan 30 21:17:31.635: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-514, will wait for the garbage collector to delete the pods Jan 30 21:17:31.706: INFO: Deleting DaemonSet.extensions daemon-set took: 10.018617ms Jan 30 21:17:32.107: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.871405ms Jan 30 21:17:43.113: INFO: Number of nodes with available pods: 0 Jan 30 21:17:43.113: INFO: Number of running nodes: 0, number of available pods: 0 Jan 30 21:17:43.119: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-514/daemonsets","resourceVersion":"5366003"},"items":null} Jan 30 21:17:43.123: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-514/pods","resourceVersion":"5366003"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:17:43.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-514" for this suite. 
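Aside: the "simple DaemonSet" created above is roughly the following sketch (label and image illustrative; the suite uses its own webserver image). The test then forces one daemon pod's phase to Failed and watches the controller create a replacement, which is why the available-pod count dips to 1 and recovers to 2 in the log:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
      namespace: daemonsets-514               # namespace from this run
    spec:
      selector:
        matchLabels:
          app: daemon-set                     # illustrative label
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: busybox                    # stand-in for the e2e webserver image
            command: ["sh", "-c", "sleep 3600"]
    EOF
    # One pod per schedulable node, matching "Number of running nodes: 2, number of available pods: 2":
    kubectl -n daemonsets-514 get pods -l app=daemon-set -o wide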
• [SLOW TEST:30.980 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":28,"skipped":401,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:17:43.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-aaf1b7c3-0c59-4161-8287-d88e29c33131 STEP: Creating a pod to test consume secrets Jan 30 21:17:43.361: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-63fc79af-d525-4d33-931c-93c96a102057" in namespace "projected-2245" to be "success or failure" Jan 30 21:17:43.365: INFO: Pod "pod-projected-secrets-63fc79af-d525-4d33-931c-93c96a102057": Phase="Pending", Reason="", readiness=false. Elapsed: 3.634464ms Jan 30 21:17:45.371: INFO: Pod "pod-projected-secrets-63fc79af-d525-4d33-931c-93c96a102057": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010400797s Jan 30 21:17:47.417: INFO: Pod "pod-projected-secrets-63fc79af-d525-4d33-931c-93c96a102057": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055686024s Jan 30 21:17:49.425: INFO: Pod "pod-projected-secrets-63fc79af-d525-4d33-931c-93c96a102057": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06375206s Jan 30 21:17:51.434: INFO: Pod "pod-projected-secrets-63fc79af-d525-4d33-931c-93c96a102057": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073115344s STEP: Saw pod success Jan 30 21:17:51.434: INFO: Pod "pod-projected-secrets-63fc79af-d525-4d33-931c-93c96a102057" satisfied condition "success or failure" Jan 30 21:17:51.439: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-63fc79af-d525-4d33-931c-93c96a102057 container projected-secret-volume-test: STEP: delete the pod Jan 30 21:17:51.485: INFO: Waiting for pod pod-projected-secrets-63fc79af-d525-4d33-931c-93c96a102057 to disappear Jan 30 21:17:51.515: INFO: Pod pod-projected-secrets-63fc79af-d525-4d33-931c-93c96a102057 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:17:51.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2245" for this suite. 
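Aside: the defaultMode knob lives on the projected volume itself, not on the individual sources. A minimal sketch reusing the secret name from this run (pod name, mount path, and the 0400 mode are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo             # illustrative name
      namespace: projected-2245               # namespace from this run
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox                        # stand-in for the e2e mounttest image
        command: ["sh", "-c", "ls -l /projected-volume"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /projected-volume
          readOnly: true
      volumes:
      - name: projected-secret-volume
        projected:
          defaultMode: 0400                   # the mode under test is set here
          sources:
          - secret:
              name: projected-secret-test-aaf1b7c3-0c59-4161-8287-d88e29c33131
    EOF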
• [SLOW TEST:8.325 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":409,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:17:51.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 30 21:17:52.326: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 30 21:17:54.343: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:17:56.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:17:58.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716015872, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 21:18:01.379: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:18:02.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-933" for this suite. STEP: Destroying namespace "webhook-933-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.849 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":30,"skipped":430,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:18:02.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Jan 30 21:18:02.445: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:18:02.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9115" for this suite. 
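Aside: with -p 0 the proxy binds an ephemeral port and prints it on stdout, which is what the "curling proxy /api/ output" step scrapes (the doubled "kubectl kubectl" in the echoed command appears to be a quirk of how the framework logs argv, not a real invocation error). By hand:

    kubectl proxy -p 0 --disable-filter &
    # stdout reports the chosen port, e.g.: Starting to serve on 127.0.0.1:37041
    curl http://127.0.0.1:<port>/api/       # substitute the printed port

--disable-filter turns off the proxy's request filtering and is only sensible in a throwaway test context; kubectl itself warns that it leaves the proxy open to XSRF.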
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":31,"skipped":436,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:18:02.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-fc41779a-7cf8-46cb-8527-d27f9e8c090f in namespace container-probe-6571 Jan 30 21:18:10.863: INFO: Started pod busybox-fc41779a-7cf8-46cb-8527-d27f9e8c090f in namespace container-probe-6571 STEP: checking the pod's current state and verifying that restartCount is present Jan 30 21:18:10.869: INFO: Initial restart count of pod busybox-fc41779a-7cf8-46cb-8527-d27f9e8c090f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:22:12.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6571" for this suite. • [SLOW TEST:249.523 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:22:12.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-7220 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-7220 STEP: Deleting pre-stop pod Jan 30 21:22:33.445: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:22:33.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7220" for this suite. • [SLOW TEST:21.303 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":33,"skipped":485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:22:33.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5880 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Jan 30 21:22:33.651: INFO: Found 0 stateful pods, waiting for 3 Jan 30 21:22:43.665: INFO: Found 2 stateful pods, waiting for 3 Jan 30 21:22:53.658: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 21:22:53.658: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 21:22:53.659: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 30 21:23:03.662: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 30 21:23:03.662: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 30 21:23:03.662: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 30 21:23:03.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5880 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 30 21:23:04.160: INFO: stderr: "I0130 21:23:03.864014 370 log.go:172] (0xc000994bb0) (0xc000a1c280) Create stream\nI0130 21:23:03.864395 370 log.go:172] (0xc000994bb0) (0xc000a1c280) Stream added, broadcasting: 1\nI0130 21:23:03.886996 370 log.go:172] (0xc000994bb0) Reply frame received for 1\nI0130 21:23:03.887109 370 log.go:172] (0xc000994bb0) (0xc0009ac1e0) Create stream\nI0130 21:23:03.887128 370 log.go:172] (0xc000994bb0) (0xc0009ac1e0) Stream added, broadcasting: 3\nI0130 21:23:03.888209 370 log.go:172] (0xc000994bb0) Reply frame received for 3\nI0130 21:23:03.888231 370 log.go:172] (0xc000994bb0) (0xc000a1c320) Create stream\nI0130 21:23:03.888239 370 log.go:172] (0xc000994bb0) (0xc000a1c320) Stream added, broadcasting: 5\nI0130 21:23:03.889131 370 log.go:172] (0xc000994bb0) Reply frame received for 5\nI0130 21:23:04.014154 370 log.go:172] (0xc000994bb0) Data frame received for 5\nI0130 21:23:04.014251 370 log.go:172] (0xc000a1c320) (5) Data frame handling\nI0130 21:23:04.014271 370 log.go:172] (0xc000a1c320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0130 21:23:04.038307 370 log.go:172] (0xc000994bb0) Data frame received for 3\nI0130 21:23:04.038344 370 log.go:172] (0xc0009ac1e0) (3) Data frame handling\nI0130 21:23:04.038356 370 log.go:172] (0xc0009ac1e0) (3) Data frame sent\nI0130 21:23:04.147510 370 log.go:172] (0xc000994bb0) Data frame received for 1\nI0130 21:23:04.147616 370 log.go:172] (0xc000994bb0) (0xc000a1c320) Stream removed, broadcasting: 5\nI0130 21:23:04.147645 370 log.go:172] (0xc000a1c280) (1) Data frame handling\nI0130 21:23:04.147654 370 log.go:172] (0xc000a1c280) (1) Data frame sent\nI0130 21:23:04.147666 370 log.go:172] (0xc000994bb0) (0xc0009ac1e0) Stream removed, broadcasting: 3\nI0130 21:23:04.147682 370 log.go:172] (0xc000994bb0) (0xc000a1c280) Stream removed, broadcasting: 1\nI0130 21:23:04.147688 370 log.go:172] (0xc000994bb0) Go away received\nI0130 21:23:04.148813 370 log.go:172] (0xc000994bb0) (0xc000a1c280) Stream removed, broadcasting: 1\nI0130 21:23:04.148830 370 log.go:172] (0xc000994bb0) (0xc0009ac1e0) Stream removed, broadcasting: 3\nI0130 21:23:04.148837 370 log.go:172] (0xc000994bb0) (0xc000a1c320) Stream removed, broadcasting: 5\n" Jan 30 21:23:04.160: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 30 21:23:04.160: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 30 21:23:14.204: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 30 21:23:24.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5880 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 30 21:23:24.646: INFO: stderr: "I0130 21:23:24.420060 388 log.go:172] (0xc000999ad0) (0xc0009ac6e0) Create stream\nI0130 21:23:24.420261 388 log.go:172] (0xc000999ad0) (0xc0009ac6e0) Stream added, broadcasting: 1\nI0130 21:23:24.422834 388 log.go:172] (0xc000999ad0) Reply frame received for 1\nI0130 21:23:24.422875 388 log.go:172] (0xc000999ad0) (0xc0005ae280) Create stream\nI0130 21:23:24.422885 388 log.go:172] (0xc000999ad0) (0xc0005ae280) Stream 
added, broadcasting: 3\nI0130 21:23:24.423987 388 log.go:172] (0xc000999ad0) Reply frame received for 3\nI0130 21:23:24.424014 388 log.go:172] (0xc000999ad0) (0xc0005ae320) Create stream\nI0130 21:23:24.424034 388 log.go:172] (0xc000999ad0) (0xc0005ae320) Stream added, broadcasting: 5\nI0130 21:23:24.424963 388 log.go:172] (0xc000999ad0) Reply frame received for 5\nI0130 21:23:24.522847 388 log.go:172] (0xc000999ad0) Data frame received for 5\nI0130 21:23:24.523187 388 log.go:172] (0xc0005ae320) (5) Data frame handling\nI0130 21:23:24.523259 388 log.go:172] (0xc0005ae320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0130 21:23:24.523467 388 log.go:172] (0xc000999ad0) Data frame received for 3\nI0130 21:23:24.523491 388 log.go:172] (0xc0005ae280) (3) Data frame handling\nI0130 21:23:24.523514 388 log.go:172] (0xc0005ae280) (3) Data frame sent\nI0130 21:23:24.634975 388 log.go:172] (0xc000999ad0) (0xc0005ae280) Stream removed, broadcasting: 3\nI0130 21:23:24.635135 388 log.go:172] (0xc000999ad0) Data frame received for 1\nI0130 21:23:24.635162 388 log.go:172] (0xc0009ac6e0) (1) Data frame handling\nI0130 21:23:24.635202 388 log.go:172] (0xc0009ac6e0) (1) Data frame sent\nI0130 21:23:24.635362 388 log.go:172] (0xc000999ad0) (0xc0009ac6e0) Stream removed, broadcasting: 1\nI0130 21:23:24.635473 388 log.go:172] (0xc000999ad0) (0xc0005ae320) Stream removed, broadcasting: 5\nI0130 21:23:24.635528 388 log.go:172] (0xc000999ad0) Go away received\nI0130 21:23:24.636487 388 log.go:172] (0xc000999ad0) (0xc0009ac6e0) Stream removed, broadcasting: 1\nI0130 21:23:24.636502 388 log.go:172] (0xc000999ad0) (0xc0005ae280) Stream removed, broadcasting: 3\nI0130 21:23:24.636507 388 log.go:172] (0xc000999ad0) (0xc0005ae320) Stream removed, broadcasting: 5\n" Jan 30 21:23:24.646: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 30 21:23:24.646: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 30 21:23:34.674: INFO: Waiting for StatefulSet statefulset-5880/ss2 to complete update Jan 30 21:23:34.675: INFO: Waiting for Pod statefulset-5880/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 30 21:23:34.675: INFO: Waiting for Pod statefulset-5880/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 30 21:23:44.688: INFO: Waiting for StatefulSet statefulset-5880/ss2 to complete update Jan 30 21:23:44.688: INFO: Waiting for Pod statefulset-5880/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 30 21:23:54.712: INFO: Waiting for StatefulSet statefulset-5880/ss2 to complete update STEP: Rolling back to a previous revision Jan 30 21:24:04.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5880 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 30 21:24:05.184: INFO: stderr: "I0130 21:24:04.940401 403 log.go:172] (0xc0009b26e0) (0xc0006cfea0) Create stream\nI0130 21:24:04.940723 403 log.go:172] (0xc0009b26e0) (0xc0006cfea0) Stream added, broadcasting: 1\nI0130 21:24:04.944772 403 log.go:172] (0xc0009b26e0) Reply frame received for 1\nI0130 21:24:04.944871 403 log.go:172] (0xc0009b26e0) (0xc000648780) Create stream\nI0130 21:24:04.944894 403 log.go:172] (0xc0009b26e0) (0xc000648780) Stream added, broadcasting: 3\nI0130 21:24:04.946681 403 log.go:172] (0xc0009b26e0) Reply frame received for 3\nI0130 21:24:04.946703 403 
log.go:172] (0xc0009b26e0) (0xc0006cff40) Create stream\nI0130 21:24:04.946714 403 log.go:172] (0xc0009b26e0) (0xc0006cff40) Stream added, broadcasting: 5\nI0130 21:24:04.948711 403 log.go:172] (0xc0009b26e0) Reply frame received for 5\nI0130 21:24:05.031513 403 log.go:172] (0xc0009b26e0) Data frame received for 5\nI0130 21:24:05.031589 403 log.go:172] (0xc0006cff40) (5) Data frame handling\nI0130 21:24:05.031614 403 log.go:172] (0xc0006cff40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0130 21:24:05.098953 403 log.go:172] (0xc0009b26e0) Data frame received for 3\nI0130 21:24:05.099018 403 log.go:172] (0xc000648780) (3) Data frame handling\nI0130 21:24:05.099079 403 log.go:172] (0xc000648780) (3) Data frame sent\nI0130 21:24:05.169266 403 log.go:172] (0xc0009b26e0) Data frame received for 1\nI0130 21:24:05.169444 403 log.go:172] (0xc0009b26e0) (0xc000648780) Stream removed, broadcasting: 3\nI0130 21:24:05.169525 403 log.go:172] (0xc0006cfea0) (1) Data frame handling\nI0130 21:24:05.169550 403 log.go:172] (0xc0006cfea0) (1) Data frame sent\nI0130 21:24:05.169561 403 log.go:172] (0xc0009b26e0) (0xc0006cfea0) Stream removed, broadcasting: 1\nI0130 21:24:05.172364 403 log.go:172] (0xc0009b26e0) (0xc0006cff40) Stream removed, broadcasting: 5\nI0130 21:24:05.172502 403 log.go:172] (0xc0009b26e0) Go away received\nI0130 21:24:05.173110 403 log.go:172] (0xc0009b26e0) (0xc0006cfea0) Stream removed, broadcasting: 1\nI0130 21:24:05.173137 403 log.go:172] (0xc0009b26e0) (0xc000648780) Stream removed, broadcasting: 3\nI0130 21:24:05.173149 403 log.go:172] (0xc0009b26e0) (0xc0006cff40) Stream removed, broadcasting: 5\n" Jan 30 21:24:05.185: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 30 21:24:05.185: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 30 21:24:15.227: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 30 21:24:25.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5880 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 30 21:24:25.651: INFO: stderr: "I0130 21:24:25.441064 424 log.go:172] (0xc0003e8dc0) (0xc000663b80) Create stream\nI0130 21:24:25.441403 424 log.go:172] (0xc0003e8dc0) (0xc000663b80) Stream added, broadcasting: 1\nI0130 21:24:25.446726 424 log.go:172] (0xc0003e8dc0) Reply frame received for 1\nI0130 21:24:25.446790 424 log.go:172] (0xc0003e8dc0) (0xc000978000) Create stream\nI0130 21:24:25.446801 424 log.go:172] (0xc0003e8dc0) (0xc000978000) Stream added, broadcasting: 3\nI0130 21:24:25.448672 424 log.go:172] (0xc0003e8dc0) Reply frame received for 3\nI0130 21:24:25.448692 424 log.go:172] (0xc0003e8dc0) (0xc000663d60) Create stream\nI0130 21:24:25.448698 424 log.go:172] (0xc0003e8dc0) (0xc000663d60) Stream added, broadcasting: 5\nI0130 21:24:25.450212 424 log.go:172] (0xc0003e8dc0) Reply frame received for 5\nI0130 21:24:25.555848 424 log.go:172] (0xc0003e8dc0) Data frame received for 3\nI0130 21:24:25.555912 424 log.go:172] (0xc000978000) (3) Data frame handling\nI0130 21:24:25.555937 424 log.go:172] (0xc000978000) (3) Data frame sent\nI0130 21:24:25.556257 424 log.go:172] (0xc0003e8dc0) Data frame received for 5\nI0130 21:24:25.556272 424 log.go:172] (0xc000663d60) (5) Data frame handling\nI0130 21:24:25.556285 424 log.go:172] (0xc000663d60) (5) Data frame sent\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0130 21:24:25.640799 424 log.go:172] (0xc0003e8dc0) Data frame received for 1\nI0130 21:24:25.640929 424 log.go:172] (0xc0003e8dc0) (0xc000978000) Stream removed, broadcasting: 3\nI0130 21:24:25.640978 424 log.go:172] (0xc000663b80) (1) Data frame handling\nI0130 21:24:25.641003 424 log.go:172] (0xc000663b80) (1) Data frame sent\nI0130 21:24:25.641035 424 log.go:172] (0xc0003e8dc0) (0xc000663d60) Stream removed, broadcasting: 5\nI0130 21:24:25.641059 424 log.go:172] (0xc0003e8dc0) (0xc000663b80) Stream removed, broadcasting: 1\nI0130 21:24:25.641078 424 log.go:172] (0xc0003e8dc0) Go away received\nI0130 21:24:25.642194 424 log.go:172] (0xc0003e8dc0) (0xc000663b80) Stream removed, broadcasting: 1\nI0130 21:24:25.642210 424 log.go:172] (0xc0003e8dc0) (0xc000978000) Stream removed, broadcasting: 3\nI0130 21:24:25.642218 424 log.go:172] (0xc0003e8dc0) (0xc000663d60) Stream removed, broadcasting: 5\n" Jan 30 21:24:25.652: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 30 21:24:25.652: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 30 21:24:35.696: INFO: Waiting for StatefulSet statefulset-5880/ss2 to complete update Jan 30 21:24:35.696: INFO: Waiting for Pod statefulset-5880/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 30 21:24:35.696: INFO: Waiting for Pod statefulset-5880/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 30 21:24:45.708: INFO: Waiting for StatefulSet statefulset-5880/ss2 to complete update Jan 30 21:24:45.708: INFO: Waiting for Pod statefulset-5880/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 30 21:24:45.708: INFO: Waiting for Pod statefulset-5880/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 30 21:24:55.707: INFO: Waiting for StatefulSet statefulset-5880/ss2 to complete update Jan 30 21:24:55.707: INFO: Waiting for Pod statefulset-5880/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 30 21:25:05.779: INFO: Waiting for StatefulSet statefulset-5880/ss2 to complete update Jan 30 21:25:05.779: INFO: Waiting for Pod statefulset-5880/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 30 21:25:15.714: INFO: Waiting for StatefulSet statefulset-5880/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 30 21:25:25.715: INFO: Deleting all statefulset in ns statefulset-5880 Jan 30 21:25:25.721: INFO: Scaling statefulset ss2 to 0 Jan 30 21:25:55.786: INFO: Waiting for statefulset status.replicas updated to 0 Jan 30 21:25:55.793: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:25:55.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5880" for this suite. 
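------------------------------
[editor's sketch] The spec above performs a rolling image update (docker.io/library/httpd:2.4.38-alpine -> 2.4.39-alpine) on the 3-replica StatefulSet ss2 and then rolls it back. A minimal hand-written equivalent, assuming the default RollingUpdate strategy; ss2, the service name test, the namespace and the image tags come from the log, while the container name and labels are assumptions:
kubectl apply -n statefulset-5880 -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test                 # headless service the spec creates first
  replicas: 3
  selector:
    matchLabels: {app: ss2}         # label is an assumption, not from the log
  template:
    metadata:
      labels: {app: ss2}
    spec:
      containers:
      - name: webserver             # container name is an assumption
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# Trigger the rolling update seen above, then roll back to the previous revision:
kubectl -n statefulset-5880 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
kubectl -n statefulset-5880 rollout undo statefulset/ss2
------------------------------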
• [SLOW TEST:202.397 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":34,"skipped":511,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:25:55.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-694b8920-865a-4c85-833c-0c1d85264efc STEP: Creating configMap with name cm-test-opt-upd-2626fc68-5335-4414-97c9-252f50767cdb STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-694b8920-865a-4c85-833c-0c1d85264efc STEP: Updating configmap cm-test-opt-upd-2626fc68-5335-4414-97c9-252f50767cdb STEP: Creating configMap with name cm-test-opt-create-48b81e23-21a6-406b-8b33-8d64b74b4635 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:26:08.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8576" for this suite. 
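------------------------------
[editor's sketch] The spec above mounts two configMaps through a single projected volume, then deletes one and updates the other while the pod is running. A minimal sketch of such a pod, with the generated name suffixes trimmed and the mount path assumed:
kubectl apply -n projected-8576 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo           # name is an assumption
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - {name: cfg, mountPath: /etc/cfg}
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del     # optional: the volume tolerates this configMap being deleted
          optional: true
      - configMap:
          name: cm-test-opt-upd     # updates here are eventually reflected in the mounted files
          optional: true
EOF
------------------------------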
• [SLOW TEST:12.446 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":581,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:26:08.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 30 21:26:08.453: INFO: Waiting up to 5m0s for pod "downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d" in namespace "downward-api-5031" to be "success or failure" Jan 30 21:26:08.513: INFO: Pod "downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d": Phase="Pending", Reason="", readiness=false. Elapsed: 60.102544ms Jan 30 21:26:10.524: INFO: Pod "downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07091819s Jan 30 21:26:12.534: INFO: Pod "downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081469189s Jan 30 21:26:14.546: INFO: Pod "downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093363141s Jan 30 21:26:16.557: INFO: Pod "downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103978868s Jan 30 21:26:18.569: INFO: Pod "downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.11603153s STEP: Saw pod success Jan 30 21:26:18.569: INFO: Pod "downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d" satisfied condition "success or failure" Jan 30 21:26:18.576: INFO: Trying to get logs from node jerma-node pod downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d container dapi-container: STEP: delete the pod Jan 30 21:26:18.645: INFO: Waiting for pod downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d to disappear Jan 30 21:26:18.669: INFO: Pod downward-api-f329d4e5-7379-4bc7-be1c-7ad8d0c7f06d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:26:18.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5031" for this suite. 
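------------------------------
[editor's sketch] The spec above checks that a container sees its own pod name, namespace and IP as environment variables. A minimal sketch using the downward API; the container name dapi-container comes from the log, the variable names are assumptions:
kubectl apply -n downward-api-5031 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo           # name is an assumption
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
    - name: POD_IP
      valueFrom: {fieldRef: {fieldPath: status.podIP}}
EOF
------------------------------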
• [SLOW TEST:10.341 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":593,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:26:18.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2c09459e-f34f-467b-baed-dff1b684b0b0 STEP: Creating a pod to test consume configMaps Jan 30 21:26:18.858: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-81897fff-8eee-4054-8958-9324c9d0c52c" in namespace "projected-1895" to be "success or failure" Jan 30 21:26:18.896: INFO: Pod "pod-projected-configmaps-81897fff-8eee-4054-8958-9324c9d0c52c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.385633ms Jan 30 21:26:20.907: INFO: Pod "pod-projected-configmaps-81897fff-8eee-4054-8958-9324c9d0c52c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048801899s Jan 30 21:26:22.916: INFO: Pod "pod-projected-configmaps-81897fff-8eee-4054-8958-9324c9d0c52c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057929088s Jan 30 21:26:24.922: INFO: Pod "pod-projected-configmaps-81897fff-8eee-4054-8958-9324c9d0c52c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064336104s Jan 30 21:26:26.932: INFO: Pod "pod-projected-configmaps-81897fff-8eee-4054-8958-9324c9d0c52c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073379138s STEP: Saw pod success Jan 30 21:26:26.932: INFO: Pod "pod-projected-configmaps-81897fff-8eee-4054-8958-9324c9d0c52c" satisfied condition "success or failure" Jan 30 21:26:26.937: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-81897fff-8eee-4054-8958-9324c9d0c52c container projected-configmap-volume-test: STEP: delete the pod Jan 30 21:26:27.021: INFO: Waiting for pod pod-projected-configmaps-81897fff-8eee-4054-8958-9324c9d0c52c to disappear Jan 30 21:26:27.029: INFO: Pod pod-projected-configmaps-81897fff-8eee-4054-8958-9324c9d0c52c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:26:27.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1895" for this suite. 
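------------------------------
[editor's sketch] The spec above asserts on the file permissions of a configMap consumed through a projected volume. A minimal sketch; 0400 is an assumed mode (the log does not show the value the test uses) and the configMap name suffix is trimmed:
kubectl apply -n projected-1895 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo         # name is an assumption
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cfg"]
    volumeMounts:
    - {name: cfg, mountPath: /etc/cfg}
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400             # octal in YAML; use the decimal 256 in JSON
      sources:
      - configMap: {name: projected-configmap-test-volume}
EOF
------------------------------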
• [SLOW TEST:8.361 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":605,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:26:27.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:26:43.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3137" for this suite. • [SLOW TEST:16.796 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":38,"skipped":610,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:26:43.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:27:14.199: INFO: Container started at 2020-01-30 21:26:50 +0000 UTC, pod became ready at 2020-01-30 21:27:13 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:27:14.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8528" for this suite. • [SLOW TEST:30.366 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:27:14.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jan 30 21:27:14.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3213' Jan 30 21:27:16.716: INFO: stderr: "" Jan 30 21:27:16.716: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Jan 30 21:27:17.725: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:27:17.725: INFO: Found 0 / 1 Jan 30 21:27:18.722: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:27:18.723: INFO: Found 0 / 1 Jan 30 21:27:19.744: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:27:19.745: INFO: Found 0 / 1 Jan 30 21:27:20.724: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:27:20.724: INFO: Found 0 / 1 Jan 30 21:27:21.723: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:27:21.723: INFO: Found 0 / 1 Jan 30 21:27:22.721: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:27:22.721: INFO: Found 0 / 1 Jan 30 21:27:23.723: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:27:23.723: INFO: Found 1 / 1 Jan 30 21:27:23.723: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 30 21:27:23.728: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:27:23.728: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 30 21:27:23.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-2qb94 --namespace=kubectl-3213 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 30 21:27:23.909: INFO: stderr: "" Jan 30 21:27:23.909: INFO: stdout: "pod/agnhost-master-2qb94 patched\n" STEP: checking annotations Jan 30 21:27:23.918: INFO: Selector matched 1 pods for map[app:agnhost] Jan 30 21:27:23.918: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:27:23.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3213" for this suite. • [SLOW TEST:9.742 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":40,"skipped":657,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:27:23.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jan 30 21:27:24.042: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Jan 30 21:27:24.830: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 30 21:27:26.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:27:28.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:27:30.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016444, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:27:34.271: INFO: Waited 1.266783052s for the sample-apiserver to be ready to handle requests. 
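------------------------------
[editor's sketch] Registering an aggregated API server, as the spec above does, comes down to pointing an APIService at a Service that fronts the sample-apiserver deployment. A rough sketch only; the wardle.example.com group/version and the service name are assumptions, and the e2e test wires up a real caBundle rather than skipping TLS verification:
kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true       # sketch only; the test supplies spec.caBundle instead
  service:
    name: sample-api                # service name is an assumption
    namespace: aggregator-4260
EOF
------------------------------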
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:27:34.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4260" for this suite. • [SLOW TEST:11.090 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":41,"skipped":662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:27:35.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:27:45.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6188" for this suite. 
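------------------------------
[editor's sketch] The spec above schedules a busybox pod with hostAliases and verifies the entries land in /etc/hosts. A minimal sketch; the alias IP and hostnames are assumptions:
kubectl apply -n kubelet-test-6188 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo            # name is an assumption
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: main
    image: busybox
    command: ["cat", "/etc/hosts"]  # the injected lines appear at the end of the file
EOF
------------------------------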
• [SLOW TEST:10.219 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":695,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:27:45.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jan 30 21:27:45.414: INFO: Waiting up to 5m0s for pod "downward-api-97a9f550-0115-40f8-8166-b9f646087a4e" in namespace "downward-api-2429" to be "success or failure" Jan 30 21:27:45.465: INFO: Pod "downward-api-97a9f550-0115-40f8-8166-b9f646087a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 50.72706ms Jan 30 21:27:47.471: INFO: Pod "downward-api-97a9f550-0115-40f8-8166-b9f646087a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056340152s Jan 30 21:27:49.477: INFO: Pod "downward-api-97a9f550-0115-40f8-8166-b9f646087a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062302298s Jan 30 21:27:51.486: INFO: Pod "downward-api-97a9f550-0115-40f8-8166-b9f646087a4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072080115s Jan 30 21:27:53.491: INFO: Pod "downward-api-97a9f550-0115-40f8-8166-b9f646087a4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07728982s STEP: Saw pod success Jan 30 21:27:53.492: INFO: Pod "downward-api-97a9f550-0115-40f8-8166-b9f646087a4e" satisfied condition "success or failure" Jan 30 21:27:53.495: INFO: Trying to get logs from node jerma-node pod downward-api-97a9f550-0115-40f8-8166-b9f646087a4e container dapi-container: STEP: delete the pod Jan 30 21:27:53.573: INFO: Waiting for pod downward-api-97a9f550-0115-40f8-8166-b9f646087a4e to disappear Jan 30 21:27:53.582: INFO: Pod downward-api-97a9f550-0115-40f8-8166-b9f646087a4e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:27:53.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2429" for this suite. 
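------------------------------
[editor's sketch] Same downward-API mechanism as the pod name/namespace/IP spec earlier, this time exposing the pod UID; only the fieldPath changes. Variable and pod names are assumptions:
kubectl apply -n downward-api-2429 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo           # name is an assumption
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $POD_UID"]
    env:
    - name: POD_UID
      valueFrom: {fieldRef: {fieldPath: metadata.uid}}
EOF
------------------------------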
• [SLOW TEST:8.334 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":700,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:27:53.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:27:53.721: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 30 21:27:56.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4211 create -f -' Jan 30 21:27:59.340: INFO: stderr: "" Jan 30 21:27:59.340: INFO: stdout: "e2e-test-crd-publish-openapi-7174-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 30 21:27:59.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4211 delete e2e-test-crd-publish-openapi-7174-crds test-cr' Jan 30 21:27:59.547: INFO: stderr: "" Jan 30 21:27:59.547: INFO: stdout: "e2e-test-crd-publish-openapi-7174-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 30 21:27:59.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4211 apply -f -' Jan 30 21:28:00.022: INFO: stderr: "" Jan 30 21:28:00.022: INFO: stdout: "e2e-test-crd-publish-openapi-7174-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 30 21:28:00.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4211 delete e2e-test-crd-publish-openapi-7174-crds test-cr' Jan 30 21:28:00.210: INFO: stderr: "" Jan 30 21:28:00.211: INFO: stdout: "e2e-test-crd-publish-openapi-7174-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jan 30 21:28:00.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7174-crds' Jan 30 21:28:00.622: INFO: stderr: "" Jan 30 21:28:00.622: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7174-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:28:04.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "crd-publish-openapi-4211" for this suite. • [SLOW TEST:10.656 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":44,"skipped":703,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:28:04.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-c842b296-4372-47c8-a2b5-2317c656475a STEP: Creating a pod to test consume secrets Jan 30 21:28:04.367: INFO: Waiting up to 5m0s for pod "pod-secrets-64113045-c1f5-4c5f-94b9-49e33e2db0cb" in namespace "secrets-4974" to be "success or failure" Jan 30 21:28:04.376: INFO: Pod "pod-secrets-64113045-c1f5-4c5f-94b9-49e33e2db0cb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.021912ms Jan 30 21:28:06.382: INFO: Pod "pod-secrets-64113045-c1f5-4c5f-94b9-49e33e2db0cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014475924s Jan 30 21:28:08.388: INFO: Pod "pod-secrets-64113045-c1f5-4c5f-94b9-49e33e2db0cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021271093s Jan 30 21:28:10.400: INFO: Pod "pod-secrets-64113045-c1f5-4c5f-94b9-49e33e2db0cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03317923s Jan 30 21:28:12.405: INFO: Pod "pod-secrets-64113045-c1f5-4c5f-94b9-49e33e2db0cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037742056s STEP: Saw pod success Jan 30 21:28:12.405: INFO: Pod "pod-secrets-64113045-c1f5-4c5f-94b9-49e33e2db0cb" satisfied condition "success or failure" Jan 30 21:28:12.409: INFO: Trying to get logs from node jerma-node pod pod-secrets-64113045-c1f5-4c5f-94b9-49e33e2db0cb container secret-env-test: STEP: delete the pod Jan 30 21:28:12.444: INFO: Waiting for pod pod-secrets-64113045-c1f5-4c5f-94b9-49e33e2db0cb to disappear Jan 30 21:28:12.451: INFO: Pod pod-secrets-64113045-c1f5-4c5f-94b9-49e33e2db0cb no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:28:12.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4974" for this suite. 
• [SLOW TEST:8.240 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":723,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:28:12.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 30 21:28:12.567: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2964ab18-a6ea-498b-9012-b00524722260" in namespace "downward-api-4328" to be "success or failure" Jan 30 21:28:12.643: INFO: Pod "downwardapi-volume-2964ab18-a6ea-498b-9012-b00524722260": Phase="Pending", Reason="", readiness=false. Elapsed: 75.454204ms Jan 30 21:28:14.653: INFO: Pod "downwardapi-volume-2964ab18-a6ea-498b-9012-b00524722260": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085698606s Jan 30 21:28:16.663: INFO: Pod "downwardapi-volume-2964ab18-a6ea-498b-9012-b00524722260": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095438674s Jan 30 21:28:18.673: INFO: Pod "downwardapi-volume-2964ab18-a6ea-498b-9012-b00524722260": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105783766s Jan 30 21:28:20.679: INFO: Pod "downwardapi-volume-2964ab18-a6ea-498b-9012-b00524722260": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111314568s STEP: Saw pod success Jan 30 21:28:20.679: INFO: Pod "downwardapi-volume-2964ab18-a6ea-498b-9012-b00524722260" satisfied condition "success or failure" Jan 30 21:28:20.682: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2964ab18-a6ea-498b-9012-b00524722260 container client-container: STEP: delete the pod Jan 30 21:28:20.740: INFO: Waiting for pod downwardapi-volume-2964ab18-a6ea-498b-9012-b00524722260 to disappear Jan 30 21:28:20.747: INFO: Pod downwardapi-volume-2964ab18-a6ea-498b-9012-b00524722260 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:28:20.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4328" for this suite. 
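------------------------------
[editor's sketch] The point of the spec above is that a downward-API volume reporting limits.memory falls back to the node's allocatable memory when the container sets no memory limit. A minimal sketch; the container name client-container comes from the log, the file and mount paths are assumptions:
kubectl apply -n downward-api-4328 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-demo            # name is an assumption
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_limit"]
    # no resources.limits.memory on purpose: the file below then reports node allocatable
    volumeMounts:
    - {name: podinfo, mountPath: /etc/podinfo}
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
------------------------------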
• [SLOW TEST:8.268 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":727,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:28:20.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 30 21:28:21.707: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 30 21:28:23.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:28:25.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:28:27.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:28:29.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016501, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 21:28:32.770: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:28:32.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1560" for this suite. STEP: Destroying namespace "webhook-1560-markers" for this suite. 
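------------------------------
[editor's sketch] The spec above first updates the webhook's rules to drop the CREATE operation (so a freshly created configMap is not mutated), then patches CREATE back in. Roughly, against admissionregistration.k8s.io/v1, with a placeholder configuration name since the log does not show it:
# Drop CREATE from the first rule of the first webhook:
kubectl patch mutatingwebhookconfiguration <config-name> --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
# ...and patch it back, after which newly created configMaps are mutated again:
kubectl patch mutatingwebhookconfiguration <config-name> --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'
------------------------------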
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.281 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":47,"skipped":746,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:28:33.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-2cce663b-63df-49e1-af0c-6400be0bc603 STEP: Creating secret with name s-test-opt-upd-3b3b11af-7e13-472d-b554-926be5336250 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2cce663b-63df-49e1-af0c-6400be0bc603 STEP: Updating secret s-test-opt-upd-3b3b11af-7e13-472d-b554-926be5336250 STEP: Creating secret with name s-test-opt-create-5edb863a-0285-4c87-8001-518d79011049 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:28:47.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8588" for this suite. 
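What drives this spec is the shape of the pod's volume: the secret is mounted with optional set to true, so the pod keeps running when the "-del-" secret is deleted, and the kubelet later projects the newly created secret into the same mount. A sketch of such a pod object using the k8s.io/api types (all names hypothetical, not the generated ones from the log):

package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// optionalSecretPod mounts a Secret marked optional, so the pod starts and
// keeps running even while the secret is absent.
func optionalSecretPod() *corev1.Pod {
	optional := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "secret-watcher",
				Image: "busybox",
				// Keep re-reading the mounted file; the kubelet refreshes the
				// volume contents when the secret changes, after a sync delay.
				Command: []string{"sh", "-c",
					"while true; do cat /etc/secret/data 2>/dev/null; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-vol", MountPath: "/etc/secret", ReadOnly: true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "my-optional-secret", // hypothetical
						Optional:   &optional,
					},
				},
			}},
		},
	}
}

The "waiting to observe update in volume" step then amounts to re-reading the mounted files until the kubelet's sync loop propagates the change, which is why this spec takes around 14 seconds.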
• [SLOW TEST:14.415 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":754,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:28:47.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:28:47.507: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1976 I0130 21:28:47.593034 8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1976, replica count: 1 I0130 21:28:48.643617 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 21:28:49.644022 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 21:28:50.645666 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 21:28:51.646258 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 21:28:52.646938 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 21:28:53.650479 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 21:28:54.651573 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 21:28:55.652411 8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 30 21:28:55.791: INFO: Created: latency-svc-4pfbm Jan 30 21:28:55.803: INFO: Got endpoints: latency-svc-4pfbm [51.126574ms] Jan 30 21:28:55.871: INFO: Created: latency-svc-j6nxj Jan 30 21:28:55.903: INFO: Got endpoints: latency-svc-j6nxj [99.234151ms] Jan 30 21:28:55.908: INFO: Created: latency-svc-764gn Jan 30 21:28:55.928: INFO: Got endpoints: latency-svc-764gn [123.194613ms] Jan 30 21:28:55.947: INFO: Created: latency-svc-cbbwb Jan 30 21:28:55.956: INFO: Got endpoints: latency-svc-cbbwb [150.443607ms] Jan 30 21:28:56.052: INFO: Created: latency-svc-45rqs Jan 30 21:28:56.080: 
INFO: Created: latency-svc-ht57w Jan 30 21:28:56.088: INFO: Got endpoints: latency-svc-45rqs [284.314948ms] Jan 30 21:28:56.109: INFO: Got endpoints: latency-svc-ht57w [303.031334ms] Jan 30 21:28:56.136: INFO: Created: latency-svc-qbxrz Jan 30 21:28:56.220: INFO: Got endpoints: latency-svc-qbxrz [415.956561ms] Jan 30 21:28:56.223: INFO: Created: latency-svc-psprv Jan 30 21:28:56.251: INFO: Created: latency-svc-4gtgw Jan 30 21:28:56.251: INFO: Got endpoints: latency-svc-psprv [446.402252ms] Jan 30 21:28:56.259: INFO: Got endpoints: latency-svc-4gtgw [453.345019ms] Jan 30 21:28:56.291: INFO: Created: latency-svc-7jcmw Jan 30 21:28:56.293: INFO: Got endpoints: latency-svc-7jcmw [488.12657ms] Jan 30 21:28:56.421: INFO: Created: latency-svc-4vdt6 Jan 30 21:28:56.462: INFO: Created: latency-svc-nhvgm Jan 30 21:28:56.471: INFO: Got endpoints: latency-svc-4vdt6 [666.415869ms] Jan 30 21:28:56.496: INFO: Got endpoints: latency-svc-nhvgm [690.102584ms] Jan 30 21:28:56.524: INFO: Created: latency-svc-zvmgx Jan 30 21:28:56.591: INFO: Got endpoints: latency-svc-zvmgx [785.380908ms] Jan 30 21:28:56.643: INFO: Created: latency-svc-76m7z Jan 30 21:28:56.648: INFO: Got endpoints: latency-svc-76m7z [842.123203ms] Jan 30 21:28:56.669: INFO: Created: latency-svc-9grq2 Jan 30 21:28:56.683: INFO: Got endpoints: latency-svc-9grq2 [877.986759ms] Jan 30 21:28:56.684: INFO: Created: latency-svc-dzw2f Jan 30 21:28:56.736: INFO: Got endpoints: latency-svc-dzw2f [930.797318ms] Jan 30 21:28:56.756: INFO: Created: latency-svc-qgt55 Jan 30 21:28:56.763: INFO: Got endpoints: latency-svc-qgt55 [859.347087ms] Jan 30 21:28:56.955: INFO: Created: latency-svc-gbnhn Jan 30 21:28:57.003: INFO: Got endpoints: latency-svc-gbnhn [1.074495106s] Jan 30 21:28:57.197: INFO: Created: latency-svc-p8zkt Jan 30 21:28:57.249: INFO: Created: latency-svc-hk9v2 Jan 30 21:28:57.253: INFO: Got endpoints: latency-svc-p8zkt [1.297192752s] Jan 30 21:28:57.260: INFO: Got endpoints: latency-svc-hk9v2 [1.172143508s] Jan 30 21:28:57.444: INFO: Created: latency-svc-64kvz Jan 30 21:28:57.466: INFO: Got endpoints: latency-svc-64kvz [1.357276021s] Jan 30 21:28:57.533: INFO: Created: latency-svc-4zf82 Jan 30 21:28:57.665: INFO: Got endpoints: latency-svc-4zf82 [1.444309835s] Jan 30 21:28:57.696: INFO: Created: latency-svc-gkcqx Jan 30 21:28:57.699: INFO: Got endpoints: latency-svc-gkcqx [1.447193243s] Jan 30 21:28:57.745: INFO: Created: latency-svc-94xl4 Jan 30 21:28:57.753: INFO: Got endpoints: latency-svc-94xl4 [1.494519621s] Jan 30 21:28:57.826: INFO: Created: latency-svc-s7ftw Jan 30 21:28:57.834: INFO: Got endpoints: latency-svc-s7ftw [1.541053001s] Jan 30 21:28:57.866: INFO: Created: latency-svc-fwsht Jan 30 21:28:57.873: INFO: Got endpoints: latency-svc-fwsht [1.400883247s] Jan 30 21:28:57.896: INFO: Created: latency-svc-6xlf7 Jan 30 21:28:57.901: INFO: Got endpoints: latency-svc-6xlf7 [1.405640386s] Jan 30 21:28:57.974: INFO: Created: latency-svc-2pjvm Jan 30 21:28:57.986: INFO: Got endpoints: latency-svc-2pjvm [1.394611997s] Jan 30 21:28:58.012: INFO: Created: latency-svc-p6xhj Jan 30 21:28:58.020: INFO: Got endpoints: latency-svc-p6xhj [1.372368237s] Jan 30 21:28:58.040: INFO: Created: latency-svc-kgjdn Jan 30 21:28:58.049: INFO: Got endpoints: latency-svc-kgjdn [1.36594417s] Jan 30 21:28:58.111: INFO: Created: latency-svc-6b5gb Jan 30 21:28:58.132: INFO: Got endpoints: latency-svc-6b5gb [1.395268988s] Jan 30 21:28:58.136: INFO: Created: latency-svc-nmrdt Jan 30 21:28:58.139: INFO: Got endpoints: latency-svc-nmrdt [1.375855306s] Jan 30 
21:28:58.165: INFO: Created: latency-svc-wdhp9 Jan 30 21:28:58.170: INFO: Got endpoints: latency-svc-wdhp9 [1.167357241s] Jan 30 21:28:58.195: INFO: Created: latency-svc-vzjtl Jan 30 21:28:58.198: INFO: Got endpoints: latency-svc-vzjtl [944.770521ms] Jan 30 21:28:58.337: INFO: Created: latency-svc-w99mp Jan 30 21:28:58.345: INFO: Got endpoints: latency-svc-w99mp [1.084028806s] Jan 30 21:28:58.362: INFO: Created: latency-svc-v4qbq Jan 30 21:28:58.371: INFO: Got endpoints: latency-svc-v4qbq [903.877892ms] Jan 30 21:28:58.393: INFO: Created: latency-svc-cqk85 Jan 30 21:28:58.397: INFO: Got endpoints: latency-svc-cqk85 [732.297067ms] Jan 30 21:28:58.476: INFO: Created: latency-svc-npm7s Jan 30 21:28:58.517: INFO: Got endpoints: latency-svc-npm7s [818.628418ms] Jan 30 21:28:58.523: INFO: Created: latency-svc-f7s95 Jan 30 21:28:58.531: INFO: Got endpoints: latency-svc-f7s95 [777.547312ms] Jan 30 21:28:58.637: INFO: Created: latency-svc-l4nxb Jan 30 21:28:58.638: INFO: Got endpoints: latency-svc-l4nxb [803.408388ms] Jan 30 21:28:58.695: INFO: Created: latency-svc-7xp68 Jan 30 21:28:58.703: INFO: Got endpoints: latency-svc-7xp68 [830.36425ms] Jan 30 21:28:58.726: INFO: Created: latency-svc-gr9r9 Jan 30 21:28:58.781: INFO: Got endpoints: latency-svc-gr9r9 [879.438929ms] Jan 30 21:28:58.789: INFO: Created: latency-svc-rc56s Jan 30 21:28:58.819: INFO: Got endpoints: latency-svc-rc56s [833.293465ms] Jan 30 21:28:58.861: INFO: Created: latency-svc-sqkkk Jan 30 21:28:58.958: INFO: Got endpoints: latency-svc-sqkkk [937.148388ms] Jan 30 21:28:58.965: INFO: Created: latency-svc-hd2wr Jan 30 21:28:58.965: INFO: Got endpoints: latency-svc-hd2wr [915.816569ms] Jan 30 21:28:59.024: INFO: Created: latency-svc-hdxtt Jan 30 21:28:59.031: INFO: Got endpoints: latency-svc-hdxtt [898.815396ms] Jan 30 21:28:59.179: INFO: Created: latency-svc-p7hfj Jan 30 21:28:59.190: INFO: Got endpoints: latency-svc-p7hfj [1.051488077s] Jan 30 21:28:59.245: INFO: Created: latency-svc-mld9q Jan 30 21:28:59.250: INFO: Got endpoints: latency-svc-mld9q [1.079648458s] Jan 30 21:28:59.349: INFO: Created: latency-svc-wcjjg Jan 30 21:28:59.353: INFO: Got endpoints: latency-svc-wcjjg [1.155339684s] Jan 30 21:28:59.399: INFO: Created: latency-svc-gzvx9 Jan 30 21:28:59.418: INFO: Got endpoints: latency-svc-gzvx9 [1.072797944s] Jan 30 21:28:59.534: INFO: Created: latency-svc-njsgs Jan 30 21:28:59.554: INFO: Got endpoints: latency-svc-njsgs [1.183516067s] Jan 30 21:28:59.609: INFO: Created: latency-svc-p8zqb Jan 30 21:28:59.742: INFO: Got endpoints: latency-svc-p8zqb [1.344235842s] Jan 30 21:28:59.835: INFO: Created: latency-svc-2lnhb Jan 30 21:28:59.903: INFO: Got endpoints: latency-svc-2lnhb [1.384803107s] Jan 30 21:28:59.940: INFO: Created: latency-svc-xfsrc Jan 30 21:28:59.970: INFO: Got endpoints: latency-svc-xfsrc [1.439388729s] Jan 30 21:29:00.004: INFO: Created: latency-svc-jgj69 Jan 30 21:29:00.048: INFO: Got endpoints: latency-svc-jgj69 [1.410386157s] Jan 30 21:29:00.072: INFO: Created: latency-svc-rkzbt Jan 30 21:29:00.081: INFO: Got endpoints: latency-svc-rkzbt [1.377468245s] Jan 30 21:29:00.107: INFO: Created: latency-svc-wwhzr Jan 30 21:29:00.135: INFO: Got endpoints: latency-svc-wwhzr [1.353569292s] Jan 30 21:29:00.190: INFO: Created: latency-svc-j5bfj Jan 30 21:29:00.194: INFO: Got endpoints: latency-svc-j5bfj [1.374239834s] Jan 30 21:29:00.218: INFO: Created: latency-svc-5s2d2 Jan 30 21:29:00.223: INFO: Got endpoints: latency-svc-5s2d2 [1.264534684s] Jan 30 21:29:00.271: INFO: Created: latency-svc-zsq7l Jan 30 21:29:00.350: 
INFO: Got endpoints: latency-svc-zsq7l [1.385250857s] Jan 30 21:29:00.375: INFO: Created: latency-svc-c6w7n Jan 30 21:29:00.385: INFO: Got endpoints: latency-svc-c6w7n [1.35387385s] Jan 30 21:29:00.415: INFO: Created: latency-svc-r95rc Jan 30 21:29:00.425: INFO: Got endpoints: latency-svc-r95rc [1.234562678s] Jan 30 21:29:00.445: INFO: Created: latency-svc-cgb6h Jan 30 21:29:00.501: INFO: Got endpoints: latency-svc-cgb6h [1.250606741s] Jan 30 21:29:00.511: INFO: Created: latency-svc-t2p26 Jan 30 21:29:00.519: INFO: Got endpoints: latency-svc-t2p26 [1.164960783s] Jan 30 21:29:00.538: INFO: Created: latency-svc-6fnp8 Jan 30 21:29:00.546: INFO: Got endpoints: latency-svc-6fnp8 [1.128486664s] Jan 30 21:29:00.563: INFO: Created: latency-svc-rkmzp Jan 30 21:29:00.570: INFO: Got endpoints: latency-svc-rkmzp [1.015187505s] Jan 30 21:29:00.646: INFO: Created: latency-svc-r58xd Jan 30 21:29:00.673: INFO: Got endpoints: latency-svc-r58xd [931.388372ms] Jan 30 21:29:00.675: INFO: Created: latency-svc-h5rzn Jan 30 21:29:00.681: INFO: Got endpoints: latency-svc-h5rzn [778.750398ms] Jan 30 21:29:00.712: INFO: Created: latency-svc-sptdf Jan 30 21:29:00.716: INFO: Got endpoints: latency-svc-sptdf [745.866192ms] Jan 30 21:29:00.828: INFO: Created: latency-svc-ztm42 Jan 30 21:29:00.864: INFO: Got endpoints: latency-svc-ztm42 [815.844838ms] Jan 30 21:29:00.874: INFO: Created: latency-svc-fxhvh Jan 30 21:29:00.878: INFO: Got endpoints: latency-svc-fxhvh [796.844866ms] Jan 30 21:29:00.912: INFO: Created: latency-svc-n8dn4 Jan 30 21:29:00.913: INFO: Got endpoints: latency-svc-n8dn4 [778.186408ms] Jan 30 21:29:00.988: INFO: Created: latency-svc-v4hn5 Jan 30 21:29:00.995: INFO: Got endpoints: latency-svc-v4hn5 [800.754446ms] Jan 30 21:29:01.021: INFO: Created: latency-svc-t2g2s Jan 30 21:29:01.027: INFO: Got endpoints: latency-svc-t2g2s [804.318289ms] Jan 30 21:29:01.074: INFO: Created: latency-svc-rgpnz Jan 30 21:29:01.169: INFO: Got endpoints: latency-svc-rgpnz [818.233239ms] Jan 30 21:29:01.182: INFO: Created: latency-svc-sc4pq Jan 30 21:29:01.192: INFO: Got endpoints: latency-svc-sc4pq [806.619651ms] Jan 30 21:29:01.207: INFO: Created: latency-svc-zwb75 Jan 30 21:29:01.233: INFO: Got endpoints: latency-svc-zwb75 [807.73752ms] Jan 30 21:29:01.240: INFO: Created: latency-svc-6btxw Jan 30 21:29:01.263: INFO: Got endpoints: latency-svc-6btxw [761.242685ms] Jan 30 21:29:01.336: INFO: Created: latency-svc-xc99q Jan 30 21:29:01.345: INFO: Got endpoints: latency-svc-xc99q [826.111737ms] Jan 30 21:29:01.367: INFO: Created: latency-svc-fbt4d Jan 30 21:29:01.376: INFO: Got endpoints: latency-svc-fbt4d [829.83162ms] Jan 30 21:29:01.414: INFO: Created: latency-svc-z45d9 Jan 30 21:29:01.472: INFO: Got endpoints: latency-svc-z45d9 [901.865782ms] Jan 30 21:29:01.502: INFO: Created: latency-svc-89zzj Jan 30 21:29:01.505: INFO: Got endpoints: latency-svc-89zzj [831.105323ms] Jan 30 21:29:01.526: INFO: Created: latency-svc-rhjbg Jan 30 21:29:01.531: INFO: Got endpoints: latency-svc-rhjbg [849.64043ms] Jan 30 21:29:01.557: INFO: Created: latency-svc-zjhmb Jan 30 21:29:01.616: INFO: Got endpoints: latency-svc-zjhmb [899.053371ms] Jan 30 21:29:01.624: INFO: Created: latency-svc-hqvz2 Jan 30 21:29:01.636: INFO: Got endpoints: latency-svc-hqvz2 [771.300167ms] Jan 30 21:29:01.667: INFO: Created: latency-svc-h42jn Jan 30 21:29:01.673: INFO: Got endpoints: latency-svc-h42jn [795.25128ms] Jan 30 21:29:01.717: INFO: Created: latency-svc-5cmh2 Jan 30 21:29:01.771: INFO: Got endpoints: latency-svc-5cmh2 [857.998522ms] Jan 30 21:29:01.777: 
INFO: Created: latency-svc-mzn68 Jan 30 21:29:01.787: INFO: Got endpoints: latency-svc-mzn68 [792.215019ms] Jan 30 21:29:01.942: INFO: Created: latency-svc-blth7 Jan 30 21:29:01.974: INFO: Created: latency-svc-qptmw Jan 30 21:29:01.975: INFO: Got endpoints: latency-svc-blth7 [947.861174ms] Jan 30 21:29:02.000: INFO: Got endpoints: latency-svc-qptmw [831.269231ms] Jan 30 21:29:02.021: INFO: Created: latency-svc-x5pgq Jan 30 21:29:02.082: INFO: Created: latency-svc-6vlhv Jan 30 21:29:02.087: INFO: Got endpoints: latency-svc-x5pgq [895.048167ms] Jan 30 21:29:02.115: INFO: Got endpoints: latency-svc-6vlhv [881.899177ms] Jan 30 21:29:02.118: INFO: Created: latency-svc-849mj Jan 30 21:29:02.124: INFO: Got endpoints: latency-svc-849mj [861.44605ms] Jan 30 21:29:02.154: INFO: Created: latency-svc-ndc7c Jan 30 21:29:02.215: INFO: Got endpoints: latency-svc-ndc7c [869.635348ms] Jan 30 21:29:02.219: INFO: Created: latency-svc-lfzcp Jan 30 21:29:02.220: INFO: Got endpoints: latency-svc-lfzcp [843.063669ms] Jan 30 21:29:02.273: INFO: Created: latency-svc-26wc7 Jan 30 21:29:02.286: INFO: Got endpoints: latency-svc-26wc7 [813.672692ms] Jan 30 21:29:02.357: INFO: Created: latency-svc-5j5gj Jan 30 21:29:02.357: INFO: Got endpoints: latency-svc-5j5gj [852.625512ms] Jan 30 21:29:02.384: INFO: Created: latency-svc-q85q4 Jan 30 21:29:02.389: INFO: Got endpoints: latency-svc-q85q4 [857.475914ms] Jan 30 21:29:02.416: INFO: Created: latency-svc-b2p8k Jan 30 21:29:02.424: INFO: Got endpoints: latency-svc-b2p8k [808.493447ms] Jan 30 21:29:02.508: INFO: Created: latency-svc-nw94p Jan 30 21:29:02.511: INFO: Got endpoints: latency-svc-nw94p [875.393288ms] Jan 30 21:29:02.560: INFO: Created: latency-svc-mjvcf Jan 30 21:29:02.668: INFO: Got endpoints: latency-svc-mjvcf [994.695428ms] Jan 30 21:29:02.679: INFO: Created: latency-svc-842gl Jan 30 21:29:02.686: INFO: Got endpoints: latency-svc-842gl [914.591706ms] Jan 30 21:29:02.717: INFO: Created: latency-svc-6c494 Jan 30 21:29:02.732: INFO: Got endpoints: latency-svc-6c494 [944.560918ms] Jan 30 21:29:02.754: INFO: Created: latency-svc-cb2cl Jan 30 21:29:02.842: INFO: Got endpoints: latency-svc-cb2cl [867.02701ms] Jan 30 21:29:02.895: INFO: Created: latency-svc-2xt8r Jan 30 21:29:02.895: INFO: Got endpoints: latency-svc-2xt8r [895.049998ms] Jan 30 21:29:02.936: INFO: Created: latency-svc-9w5np Jan 30 21:29:02.962: INFO: Got endpoints: latency-svc-9w5np [875.417949ms] Jan 30 21:29:02.988: INFO: Created: latency-svc-xl4bg Jan 30 21:29:02.996: INFO: Got endpoints: latency-svc-xl4bg [880.767166ms] Jan 30 21:29:03.019: INFO: Created: latency-svc-xhxbg Jan 30 21:29:03.040: INFO: Got endpoints: latency-svc-xhxbg [915.845756ms] Jan 30 21:29:03.068: INFO: Created: latency-svc-bsvnc Jan 30 21:29:03.134: INFO: Got endpoints: latency-svc-bsvnc [918.829128ms] Jan 30 21:29:03.197: INFO: Created: latency-svc-86w95 Jan 30 21:29:03.214: INFO: Got endpoints: latency-svc-86w95 [993.959924ms] Jan 30 21:29:03.292: INFO: Created: latency-svc-qcnnc Jan 30 21:29:03.292: INFO: Got endpoints: latency-svc-qcnnc [1.0063577s] Jan 30 21:29:03.326: INFO: Created: latency-svc-2x9zd Jan 30 21:29:03.333: INFO: Got endpoints: latency-svc-2x9zd [975.467691ms] Jan 30 21:29:03.361: INFO: Created: latency-svc-7dv9d Jan 30 21:29:03.364: INFO: Got endpoints: latency-svc-7dv9d [974.847338ms] Jan 30 21:29:03.439: INFO: Created: latency-svc-v2xr7 Jan 30 21:29:03.505: INFO: Got endpoints: latency-svc-v2xr7 [172.401967ms] Jan 30 21:29:03.507: INFO: Created: latency-svc-xnjvd Jan 30 21:29:03.519: INFO: Got endpoints: 
latency-svc-xnjvd [1.094562288s] Jan 30 21:29:03.582: INFO: Created: latency-svc-x4grz Jan 30 21:29:03.601: INFO: Got endpoints: latency-svc-x4grz [1.089464417s] Jan 30 21:29:03.604: INFO: Created: latency-svc-lkxpv Jan 30 21:29:03.614: INFO: Got endpoints: latency-svc-lkxpv [945.258475ms] Jan 30 21:29:03.661: INFO: Created: latency-svc-jzx94 Jan 30 21:29:03.667: INFO: Got endpoints: latency-svc-jzx94 [980.23451ms] Jan 30 21:29:03.730: INFO: Created: latency-svc-wvznk Jan 30 21:29:03.775: INFO: Created: latency-svc-nhhx6 Jan 30 21:29:03.776: INFO: Got endpoints: latency-svc-wvznk [1.043923172s] Jan 30 21:29:03.785: INFO: Got endpoints: latency-svc-nhhx6 [942.842151ms] Jan 30 21:29:03.815: INFO: Created: latency-svc-vknrs Jan 30 21:29:03.923: INFO: Got endpoints: latency-svc-vknrs [1.027251763s] Jan 30 21:29:03.978: INFO: Created: latency-svc-h6dvr Jan 30 21:29:04.020: INFO: Created: latency-svc-c625r Jan 30 21:29:04.024: INFO: Got endpoints: latency-svc-h6dvr [1.061526746s] Jan 30 21:29:04.084: INFO: Got endpoints: latency-svc-c625r [1.088098568s] Jan 30 21:29:04.094: INFO: Created: latency-svc-j7hjl Jan 30 21:29:04.101: INFO: Got endpoints: latency-svc-j7hjl [1.060955372s] Jan 30 21:29:04.129: INFO: Created: latency-svc-29t5n Jan 30 21:29:04.130: INFO: Got endpoints: latency-svc-29t5n [995.316332ms] Jan 30 21:29:04.147: INFO: Created: latency-svc-629cf Jan 30 21:29:04.150: INFO: Got endpoints: latency-svc-629cf [936.468575ms] Jan 30 21:29:04.170: INFO: Created: latency-svc-sgcr8 Jan 30 21:29:04.174: INFO: Got endpoints: latency-svc-sgcr8 [882.139887ms] Jan 30 21:29:04.236: INFO: Created: latency-svc-glhvg Jan 30 21:29:04.238: INFO: Got endpoints: latency-svc-glhvg [873.817924ms] Jan 30 21:29:04.318: INFO: Created: latency-svc-4fdhj Jan 30 21:29:04.320: INFO: Got endpoints: latency-svc-4fdhj [814.730537ms] Jan 30 21:29:04.397: INFO: Created: latency-svc-rmt2r Jan 30 21:29:04.403: INFO: Got endpoints: latency-svc-rmt2r [884.031472ms] Jan 30 21:29:04.417: INFO: Created: latency-svc-xgzvq Jan 30 21:29:04.423: INFO: Got endpoints: latency-svc-xgzvq [821.975015ms] Jan 30 21:29:04.441: INFO: Created: latency-svc-knkq7 Jan 30 21:29:04.451: INFO: Got endpoints: latency-svc-knkq7 [836.315305ms] Jan 30 21:29:04.537: INFO: Created: latency-svc-dshbw Jan 30 21:29:04.539: INFO: Got endpoints: latency-svc-dshbw [872.500423ms] Jan 30 21:29:04.566: INFO: Created: latency-svc-vtwgw Jan 30 21:29:04.577: INFO: Got endpoints: latency-svc-vtwgw [801.316238ms] Jan 30 21:29:04.610: INFO: Created: latency-svc-knvfb Jan 30 21:29:04.657: INFO: Got endpoints: latency-svc-knvfb [871.618833ms] Jan 30 21:29:04.683: INFO: Created: latency-svc-wfpfp Jan 30 21:29:04.690: INFO: Got endpoints: latency-svc-wfpfp [766.856634ms] Jan 30 21:29:04.707: INFO: Created: latency-svc-2dbth Jan 30 21:29:04.713: INFO: Got endpoints: latency-svc-2dbth [688.572257ms] Jan 30 21:29:04.742: INFO: Created: latency-svc-4xzmg Jan 30 21:29:04.793: INFO: Got endpoints: latency-svc-4xzmg [708.096827ms] Jan 30 21:29:04.821: INFO: Created: latency-svc-5hx5h Jan 30 21:29:04.828: INFO: Got endpoints: latency-svc-5hx5h [726.751304ms] Jan 30 21:29:04.861: INFO: Created: latency-svc-44sqn Jan 30 21:29:04.862: INFO: Got endpoints: latency-svc-44sqn [732.33866ms] Jan 30 21:29:04.889: INFO: Created: latency-svc-srqlj Jan 30 21:29:04.967: INFO: Got endpoints: latency-svc-srqlj [816.141758ms] Jan 30 21:29:04.989: INFO: Created: latency-svc-v6jrf Jan 30 21:29:05.008: INFO: Got endpoints: latency-svc-v6jrf [833.811554ms] Jan 30 21:29:05.038: INFO: Created: 
latency-svc-c556j Jan 30 21:29:05.040: INFO: Got endpoints: latency-svc-c556j [802.326409ms] Jan 30 21:29:05.089: INFO: Created: latency-svc-pvwls Jan 30 21:29:05.092: INFO: Got endpoints: latency-svc-pvwls [771.20727ms] Jan 30 21:29:05.115: INFO: Created: latency-svc-27q8x Jan 30 21:29:05.126: INFO: Got endpoints: latency-svc-27q8x [722.499774ms] Jan 30 21:29:05.150: INFO: Created: latency-svc-k6vwx Jan 30 21:29:05.168: INFO: Got endpoints: latency-svc-k6vwx [744.033408ms] Jan 30 21:29:05.227: INFO: Created: latency-svc-nx8hg Jan 30 21:29:05.282: INFO: Got endpoints: latency-svc-nx8hg [830.950519ms] Jan 30 21:29:05.287: INFO: Created: latency-svc-w5gx6 Jan 30 21:29:05.297: INFO: Got endpoints: latency-svc-w5gx6 [757.55198ms] Jan 30 21:29:05.390: INFO: Created: latency-svc-p7lq5 Jan 30 21:29:05.412: INFO: Got endpoints: latency-svc-p7lq5 [834.685555ms] Jan 30 21:29:05.440: INFO: Created: latency-svc-vf9sv Jan 30 21:29:05.443: INFO: Got endpoints: latency-svc-vf9sv [785.426778ms] Jan 30 21:29:05.462: INFO: Created: latency-svc-xcjn7 Jan 30 21:29:05.471: INFO: Got endpoints: latency-svc-xcjn7 [780.612878ms] Jan 30 21:29:05.523: INFO: Created: latency-svc-kkgsj Jan 30 21:29:05.533: INFO: Got endpoints: latency-svc-kkgsj [820.045821ms] Jan 30 21:29:05.556: INFO: Created: latency-svc-xqp7p Jan 30 21:29:05.581: INFO: Created: latency-svc-2mb62 Jan 30 21:29:05.584: INFO: Got endpoints: latency-svc-xqp7p [791.107909ms] Jan 30 21:29:05.588: INFO: Got endpoints: latency-svc-2mb62 [759.43434ms] Jan 30 21:29:05.604: INFO: Created: latency-svc-hpnpd Jan 30 21:29:05.671: INFO: Got endpoints: latency-svc-hpnpd [809.227909ms] Jan 30 21:29:05.678: INFO: Created: latency-svc-r8qw8 Jan 30 21:29:05.690: INFO: Got endpoints: latency-svc-r8qw8 [722.568539ms] Jan 30 21:29:05.715: INFO: Created: latency-svc-bgfl4 Jan 30 21:29:05.742: INFO: Got endpoints: latency-svc-bgfl4 [733.804212ms] Jan 30 21:29:05.745: INFO: Created: latency-svc-tq5hh Jan 30 21:29:05.759: INFO: Got endpoints: latency-svc-tq5hh [719.082647ms] Jan 30 21:29:05.816: INFO: Created: latency-svc-4d56v Jan 30 21:29:05.819: INFO: Got endpoints: latency-svc-4d56v [726.836914ms] Jan 30 21:29:05.853: INFO: Created: latency-svc-76wvd Jan 30 21:29:05.861: INFO: Got endpoints: latency-svc-76wvd [735.291062ms] Jan 30 21:29:05.896: INFO: Created: latency-svc-wv9d9 Jan 30 21:29:05.902: INFO: Got endpoints: latency-svc-wv9d9 [733.962997ms] Jan 30 21:29:05.953: INFO: Created: latency-svc-s5d76 Jan 30 21:29:05.971: INFO: Got endpoints: latency-svc-s5d76 [688.653261ms] Jan 30 21:29:05.973: INFO: Created: latency-svc-jpmwp Jan 30 21:29:05.975: INFO: Got endpoints: latency-svc-jpmwp [678.372457ms] Jan 30 21:29:06.009: INFO: Created: latency-svc-x9rkr Jan 30 21:29:06.012: INFO: Got endpoints: latency-svc-x9rkr [599.517938ms] Jan 30 21:29:06.030: INFO: Created: latency-svc-hg95c Jan 30 21:29:06.035: INFO: Got endpoints: latency-svc-hg95c [591.820209ms] Jan 30 21:29:06.099: INFO: Created: latency-svc-26492 Jan 30 21:29:06.120: INFO: Created: latency-svc-vfxvh Jan 30 21:29:06.121: INFO: Got endpoints: latency-svc-26492 [649.615508ms] Jan 30 21:29:06.139: INFO: Got endpoints: latency-svc-vfxvh [605.245931ms] Jan 30 21:29:06.163: INFO: Created: latency-svc-j4vzw Jan 30 21:29:06.172: INFO: Got endpoints: latency-svc-j4vzw [588.433608ms] Jan 30 21:29:06.197: INFO: Created: latency-svc-vr94f Jan 30 21:29:06.271: INFO: Got endpoints: latency-svc-vr94f [683.423396ms] Jan 30 21:29:06.373: INFO: Created: latency-svc-tljzq Jan 30 21:29:06.375: INFO: Got endpoints: 
latency-svc-tljzq [702.992087ms] Jan 30 21:29:06.425: INFO: Created: latency-svc-764kp Jan 30 21:29:06.425: INFO: Got endpoints: latency-svc-764kp [735.020074ms] Jan 30 21:29:06.471: INFO: Created: latency-svc-kdrpr Jan 30 21:29:06.524: INFO: Got endpoints: latency-svc-kdrpr [781.535412ms] Jan 30 21:29:06.529: INFO: Created: latency-svc-wznfl Jan 30 21:29:06.537: INFO: Got endpoints: latency-svc-wznfl [777.425976ms] Jan 30 21:29:06.569: INFO: Created: latency-svc-nqmq7 Jan 30 21:29:06.593: INFO: Got endpoints: latency-svc-nqmq7 [774.038348ms] Jan 30 21:29:06.596: INFO: Created: latency-svc-vrfvg Jan 30 21:29:06.613: INFO: Got endpoints: latency-svc-vrfvg [751.257781ms] Jan 30 21:29:06.683: INFO: Created: latency-svc-skbvv Jan 30 21:29:06.706: INFO: Got endpoints: latency-svc-skbvv [803.959982ms] Jan 30 21:29:06.711: INFO: Created: latency-svc-ptbv4 Jan 30 21:29:06.731: INFO: Got endpoints: latency-svc-ptbv4 [759.280053ms] Jan 30 21:29:06.763: INFO: Created: latency-svc-9m2t4 Jan 30 21:29:06.769: INFO: Got endpoints: latency-svc-9m2t4 [793.628055ms] Jan 30 21:29:06.825: INFO: Created: latency-svc-2qxs8 Jan 30 21:29:06.836: INFO: Got endpoints: latency-svc-2qxs8 [823.823644ms] Jan 30 21:29:06.885: INFO: Created: latency-svc-2lgnk Jan 30 21:29:07.029: INFO: Got endpoints: latency-svc-2lgnk [993.629958ms] Jan 30 21:29:07.034: INFO: Created: latency-svc-nqhrg Jan 30 21:29:07.039: INFO: Got endpoints: latency-svc-nqhrg [918.918768ms] Jan 30 21:29:07.084: INFO: Created: latency-svc-xz449 Jan 30 21:29:07.092: INFO: Got endpoints: latency-svc-xz449 [952.981804ms] Jan 30 21:29:07.174: INFO: Created: latency-svc-4wskm Jan 30 21:29:07.184: INFO: Got endpoints: latency-svc-4wskm [1.011149095s] Jan 30 21:29:07.232: INFO: Created: latency-svc-pwkjx Jan 30 21:29:07.235: INFO: Got endpoints: latency-svc-pwkjx [963.998106ms] Jan 30 21:29:07.364: INFO: Created: latency-svc-9vqdt Jan 30 21:29:07.459: INFO: Got endpoints: latency-svc-9vqdt [1.083995583s] Jan 30 21:29:07.465: INFO: Created: latency-svc-hphlg Jan 30 21:29:07.517: INFO: Got endpoints: latency-svc-hphlg [1.092018446s] Jan 30 21:29:07.548: INFO: Created: latency-svc-nklcw Jan 30 21:29:07.554: INFO: Got endpoints: latency-svc-nklcw [1.029202888s] Jan 30 21:29:07.579: INFO: Created: latency-svc-csh5g Jan 30 21:29:07.591: INFO: Got endpoints: latency-svc-csh5g [1.053601865s] Jan 30 21:29:07.744: INFO: Created: latency-svc-lwc2q Jan 30 21:29:07.777: INFO: Created: latency-svc-98n6p Jan 30 21:29:07.779: INFO: Got endpoints: latency-svc-lwc2q [1.185604805s] Jan 30 21:29:07.810: INFO: Got endpoints: latency-svc-98n6p [1.19722545s] Jan 30 21:29:07.895: INFO: Created: latency-svc-6qf4z Jan 30 21:29:07.928: INFO: Got endpoints: latency-svc-6qf4z [1.222060012s] Jan 30 21:29:07.929: INFO: Created: latency-svc-mhfcz Jan 30 21:29:07.938: INFO: Got endpoints: latency-svc-mhfcz [1.207541862s] Jan 30 21:29:07.973: INFO: Created: latency-svc-vzff8 Jan 30 21:29:08.036: INFO: Got endpoints: latency-svc-vzff8 [1.266236453s] Jan 30 21:29:08.051: INFO: Created: latency-svc-sdc2m Jan 30 21:29:08.054: INFO: Got endpoints: latency-svc-sdc2m [1.217909892s] Jan 30 21:29:08.076: INFO: Created: latency-svc-8fb7b Jan 30 21:29:08.079: INFO: Got endpoints: latency-svc-8fb7b [1.049960939s] Jan 30 21:29:08.099: INFO: Created: latency-svc-lvplr Jan 30 21:29:08.124: INFO: Got endpoints: latency-svc-lvplr [1.084779096s] Jan 30 21:29:08.127: INFO: Created: latency-svc-nzk7g Jan 30 21:29:08.135: INFO: Got endpoints: latency-svc-nzk7g [1.042954269s] Jan 30 21:29:08.206: INFO: Created: 
latency-svc-5bf9l Jan 30 21:29:08.222: INFO: Created: latency-svc-vtg6p Jan 30 21:29:08.222: INFO: Got endpoints: latency-svc-5bf9l [1.038673364s] Jan 30 21:29:08.266: INFO: Got endpoints: latency-svc-vtg6p [1.030900444s] Jan 30 21:29:08.267: INFO: Created: latency-svc-fx9n7 Jan 30 21:29:08.347: INFO: Got endpoints: latency-svc-fx9n7 [887.883094ms] Jan 30 21:29:08.350: INFO: Created: latency-svc-9ggsg Jan 30 21:29:08.364: INFO: Got endpoints: latency-svc-9ggsg [846.838195ms] Jan 30 21:29:08.364: INFO: Latencies: [99.234151ms 123.194613ms 150.443607ms 172.401967ms 284.314948ms 303.031334ms 415.956561ms 446.402252ms 453.345019ms 488.12657ms 588.433608ms 591.820209ms 599.517938ms 605.245931ms 649.615508ms 666.415869ms 678.372457ms 683.423396ms 688.572257ms 688.653261ms 690.102584ms 702.992087ms 708.096827ms 719.082647ms 722.499774ms 722.568539ms 726.751304ms 726.836914ms 732.297067ms 732.33866ms 733.804212ms 733.962997ms 735.020074ms 735.291062ms 744.033408ms 745.866192ms 751.257781ms 757.55198ms 759.280053ms 759.43434ms 761.242685ms 766.856634ms 771.20727ms 771.300167ms 774.038348ms 777.425976ms 777.547312ms 778.186408ms 778.750398ms 780.612878ms 781.535412ms 785.380908ms 785.426778ms 791.107909ms 792.215019ms 793.628055ms 795.25128ms 796.844866ms 800.754446ms 801.316238ms 802.326409ms 803.408388ms 803.959982ms 804.318289ms 806.619651ms 807.73752ms 808.493447ms 809.227909ms 813.672692ms 814.730537ms 815.844838ms 816.141758ms 818.233239ms 818.628418ms 820.045821ms 821.975015ms 823.823644ms 826.111737ms 829.83162ms 830.36425ms 830.950519ms 831.105323ms 831.269231ms 833.293465ms 833.811554ms 834.685555ms 836.315305ms 842.123203ms 843.063669ms 846.838195ms 849.64043ms 852.625512ms 857.475914ms 857.998522ms 859.347087ms 861.44605ms 867.02701ms 869.635348ms 871.618833ms 872.500423ms 873.817924ms 875.393288ms 875.417949ms 877.986759ms 879.438929ms 880.767166ms 881.899177ms 882.139887ms 884.031472ms 887.883094ms 895.048167ms 895.049998ms 898.815396ms 899.053371ms 901.865782ms 903.877892ms 914.591706ms 915.816569ms 915.845756ms 918.829128ms 918.918768ms 930.797318ms 931.388372ms 936.468575ms 937.148388ms 942.842151ms 944.560918ms 944.770521ms 945.258475ms 947.861174ms 952.981804ms 963.998106ms 974.847338ms 975.467691ms 980.23451ms 993.629958ms 993.959924ms 994.695428ms 995.316332ms 1.0063577s 1.011149095s 1.015187505s 1.027251763s 1.029202888s 1.030900444s 1.038673364s 1.042954269s 1.043923172s 1.049960939s 1.051488077s 1.053601865s 1.060955372s 1.061526746s 1.072797944s 1.074495106s 1.079648458s 1.083995583s 1.084028806s 1.084779096s 1.088098568s 1.089464417s 1.092018446s 1.094562288s 1.128486664s 1.155339684s 1.164960783s 1.167357241s 1.172143508s 1.183516067s 1.185604805s 1.19722545s 1.207541862s 1.217909892s 1.222060012s 1.234562678s 1.250606741s 1.264534684s 1.266236453s 1.297192752s 1.344235842s 1.353569292s 1.35387385s 1.357276021s 1.36594417s 1.372368237s 1.374239834s 1.375855306s 1.377468245s 1.384803107s 1.385250857s 1.394611997s 1.395268988s 1.400883247s 1.405640386s 1.410386157s 1.439388729s 1.444309835s 1.447193243s 1.494519621s 1.541053001s] Jan 30 21:29:08.365: INFO: 50 %ile: 873.817924ms Jan 30 21:29:08.365: INFO: 90 %ile: 1.353569292s Jan 30 21:29:08.365: INFO: 99 %ile: 1.494519621s Jan 30 21:29:08.365: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:29:08.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "svc-latency-1976" for this suite. • [SLOW TEST:20.917 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":49,"skipped":784,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:29:08.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 30 21:29:09.208: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 30 21:29:11.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:29:13.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:29:15.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:29:17.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:29:19.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716016549, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 21:29:22.508: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 30 21:29:23.508: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 30 21:29:24.508: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) of the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) of the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:29:34.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1273" for this suite. STEP: Destroying namespace "webhook-1273-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:26.622 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":50,"skipped":820,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:29:35.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 30 21:29:35.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b41404c-9e9e-4376-8cdb-b6f3ac20a3eb" in namespace "projected-7958" to be "success or failure" Jan 30 21:29:35.194: INFO: Pod "downwardapi-volume-1b41404c-9e9e-4376-8cdb-b6f3ac20a3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 89.386295ms Jan 30 21:29:37.200: INFO: Pod "downwardapi-volume-1b41404c-9e9e-4376-8cdb-b6f3ac20a3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095360947s Jan 30 21:29:39.209: INFO: Pod "downwardapi-volume-1b41404c-9e9e-4376-8cdb-b6f3ac20a3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104617218s Jan 30 21:29:41.217: INFO: Pod "downwardapi-volume-1b41404c-9e9e-4376-8cdb-b6f3ac20a3eb": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.112819563s Jan 30 21:29:43.223: INFO: Pod "downwardapi-volume-1b41404c-9e9e-4376-8cdb-b6f3ac20a3eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.119077106s STEP: Saw pod success Jan 30 21:29:43.224: INFO: Pod "downwardapi-volume-1b41404c-9e9e-4376-8cdb-b6f3ac20a3eb" satisfied condition "success or failure" Jan 30 21:29:43.227: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1b41404c-9e9e-4376-8cdb-b6f3ac20a3eb container client-container: STEP: delete the pod Jan 30 21:29:44.023: INFO: Waiting for pod downwardapi-volume-1b41404c-9e9e-4376-8cdb-b6f3ac20a3eb to disappear Jan 30 21:29:44.040: INFO: Pod downwardapi-volume-1b41404c-9e9e-4376-8cdb-b6f3ac20a3eb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:29:44.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7958" for this suite. • [SLOW TEST:9.219 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":833,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:29:44.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ff033e71-c72e-44fc-a427-d8e1a822fb40 STEP: Creating a pod to test consume secrets Jan 30 21:29:44.403: INFO: Waiting up to 5m0s for pod "pod-secrets-fb972e36-0689-4971-bab2-f3c907ec884e" in namespace "secrets-9753" to be "success or failure" Jan 30 21:29:44.432: INFO: Pod "pod-secrets-fb972e36-0689-4971-bab2-f3c907ec884e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.269408ms Jan 30 21:29:46.440: INFO: Pod "pod-secrets-fb972e36-0689-4971-bab2-f3c907ec884e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036464422s Jan 30 21:29:48.487: INFO: Pod "pod-secrets-fb972e36-0689-4971-bab2-f3c907ec884e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083214293s Jan 30 21:29:50.492: INFO: Pod "pod-secrets-fb972e36-0689-4971-bab2-f3c907ec884e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088724673s Jan 30 21:29:52.547: INFO: Pod "pod-secrets-fb972e36-0689-4971-bab2-f3c907ec884e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.143977558s STEP: Saw pod success Jan 30 21:29:52.548: INFO: Pod "pod-secrets-fb972e36-0689-4971-bab2-f3c907ec884e" satisfied condition "success or failure" Jan 30 21:29:52.553: INFO: Trying to get logs from node jerma-node pod pod-secrets-fb972e36-0689-4971-bab2-f3c907ec884e container secret-volume-test: STEP: delete the pod Jan 30 21:29:52.641: INFO: Waiting for pod pod-secrets-fb972e36-0689-4971-bab2-f3c907ec884e to disappear Jan 30 21:29:52.707: INFO: Pod pod-secrets-fb972e36-0689-4971-bab2-f3c907ec884e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:29:52.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9753" for this suite. • [SLOW TEST:8.509 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":852,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:29:52.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 30 21:29:52.909: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 30 21:29:52.927: INFO: Waiting for terminating namespaces to be deleted... 
Jan 30 21:29:52.935: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 30 21:29:52.949: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Jan 30 21:29:52.949: INFO: Container kube-proxy ready: true, restart count 0 Jan 30 21:29:52.949: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 30 21:29:52.949: INFO: Container weave ready: true, restart count 1 Jan 30 21:29:52.949: INFO: Container weave-npc ready: true, restart count 0 Jan 30 21:29:52.949: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 30 21:29:52.986: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 30 21:29:52.986: INFO: Container coredns ready: true, restart count 0 Jan 30 21:29:52.986: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 30 21:29:52.986: INFO: Container coredns ready: true, restart count 0 Jan 30 21:29:52.986: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 30 21:29:52.986: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 30 21:29:52.986: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Jan 30 21:29:52.986: INFO: Container kube-proxy ready: true, restart count 0 Jan 30 21:29:52.986: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 30 21:29:52.986: INFO: Container weave ready: true, restart count 0 Jan 30 21:29:52.986: INFO: Container weave-npc ready: true, restart count 0 Jan 30 21:29:52.986: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 30 21:29:52.986: INFO: Container kube-scheduler ready: true, restart count 4 Jan 30 21:29:52.986: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 30 21:29:52.986: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 21:29:52.986: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 30 21:29:52.986: INFO: Container etcd ready: true, restart count 1 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15eec76da900b3f4], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:29:54.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-988" for this suite.
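For reference, the unschedulable pod at the heart of this spec needs nothing more than a nodeSelector no node satisfies; the FailedScheduling event quoted above is the scheduler's verdict. A sketch with a made-up label key (the real test generates a unique one per run):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unschedulablePod returns a pod whose nodeSelector matches no node label,
// so the scheduler emits "FailedScheduling ... didn't match node selector".
func unschedulablePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"e2e.example/nonexistent": "42"}, // hypothetical key
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
}

func main() { fmt.Println(unschedulablePod().Spec.NodeSelector) }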
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":53,"skipped":858,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:29:54.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Jan 30 21:29:54.155: INFO: Waiting up to 5m0s for pod "pod-b9bef087-7cb5-4e40-b721-78f0c84e04c1" in namespace "emptydir-5173" to be "success or failure" Jan 30 21:29:54.159: INFO: Pod "pod-b9bef087-7cb5-4e40-b721-78f0c84e04c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107037ms Jan 30 21:29:56.166: INFO: Pod "pod-b9bef087-7cb5-4e40-b721-78f0c84e04c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01076185s Jan 30 21:29:58.224: INFO: Pod "pod-b9bef087-7cb5-4e40-b721-78f0c84e04c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068900156s Jan 30 21:30:00.231: INFO: Pod "pod-b9bef087-7cb5-4e40-b721-78f0c84e04c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075885203s Jan 30 21:30:02.238: INFO: Pod "pod-b9bef087-7cb5-4e40-b721-78f0c84e04c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083112698s STEP: Saw pod success Jan 30 21:30:02.238: INFO: Pod "pod-b9bef087-7cb5-4e40-b721-78f0c84e04c1" satisfied condition "success or failure" Jan 30 21:30:02.242: INFO: Trying to get logs from node jerma-node pod pod-b9bef087-7cb5-4e40-b721-78f0c84e04c1 container test-container: STEP: delete the pod Jan 30 21:30:02.287: INFO: Waiting for pod pod-b9bef087-7cb5-4e40-b721-78f0c84e04c1 to disappear Jan 30 21:30:02.296: INFO: Pod pod-b9bef087-7cb5-4e40-b721-78f0c84e04c1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:30:02.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5173" for this suite. 
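The (root,0777,tmpfs) triple in the spec name decodes as: run as root, expect file mode 0777, and back the emptyDir with memory (tmpfs). In pod form that is roughly the following — a sketch, not the e2e framework's exact builder, and the mount path and verification command are illustrative:

package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod mounts a memory-backed emptyDir and checks its mode; the
// pod is expected to run to completion, hence RestartPolicyNever and the
// suite's "success or failure" wait.
func tmpfsEmptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Print the volume's permission bits, then prove it is writable.
				Command: []string{"sh", "-c",
					"stat -c '%a' /test-volume && echo ok > /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "test-volume", MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what makes this the tmpfs variant;
					// the (root,0644,default) spec below omits it.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}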
• [SLOW TEST:8.250 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":875,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:30:02.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 30 21:30:02.413: INFO: Waiting up to 5m0s for pod "pod-3a61e362-e817-4ffa-b8da-9aeb3bc2276b" in namespace "emptydir-2835" to be "success or failure" Jan 30 21:30:02.442: INFO: Pod "pod-3a61e362-e817-4ffa-b8da-9aeb3bc2276b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.639726ms Jan 30 21:30:04.473: INFO: Pod "pod-3a61e362-e817-4ffa-b8da-9aeb3bc2276b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060446251s Jan 30 21:30:06.484: INFO: Pod "pod-3a61e362-e817-4ffa-b8da-9aeb3bc2276b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070793825s Jan 30 21:30:08.534: INFO: Pod "pod-3a61e362-e817-4ffa-b8da-9aeb3bc2276b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121208763s STEP: Saw pod success Jan 30 21:30:08.534: INFO: Pod "pod-3a61e362-e817-4ffa-b8da-9aeb3bc2276b" satisfied condition "success or failure" Jan 30 21:30:08.539: INFO: Trying to get logs from node jerma-node pod pod-3a61e362-e817-4ffa-b8da-9aeb3bc2276b container test-container: STEP: delete the pod Jan 30 21:30:08.601: INFO: Waiting for pod pod-3a61e362-e817-4ffa-b8da-9aeb3bc2276b to disappear Jan 30 21:30:08.619: INFO: Pod pod-3a61e362-e817-4ffa-b8da-9aeb3bc2276b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:30:08.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2835" for this suite. 
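Each of these volume specs produces the same cadence in the log: "Waiting up to 5m0s for pod ... to be 'success or failure'", then Pending entries roughly every two seconds until Succeeded. That comes from a phase poll along these lines — a sketch against a recent context-aware client-go (the framework's real helper differs in details such as treating Failed as a distinct outcome):

package demo

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion re-fetches the pod until it reaches a terminal phase,
// mirroring the suite's "success or failure" wait and its log lines.
func waitForPodCompletion(client kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case corev1.PodSucceeded, corev1.PodFailed:
			return true, nil // terminal phase: stop polling
		default:
			return false, nil // Pending/Running: keep polling
		}
	})
}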
• [SLOW TEST:6.371 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":882,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:30:08.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:30:19.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1080" for this suite. • [SLOW TEST:11.168 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":56,"skipped":889,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:30:19.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0130 21:30:33.690635 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 30 21:30:33.691: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:30:33.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5378" for this suite. 
• [SLOW TEST:14.558 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":57,"skipped":921,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:30:34.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:30:35.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 30 21:30:37.186: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T21:30:36Z generation:1 name:name1 resourceVersion:5370567 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f3d2f273-a85b-4bdd-b5cb-f747aede7cae] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 30 21:30:47.332: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T21:30:47Z generation:1 name:name2 resourceVersion:5370625 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4af6c31c-9780-4139-9deb-d182ee3b2f0d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 30 21:30:57.345: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T21:30:36Z generation:2 name:name1 resourceVersion:5370653 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f3d2f273-a85b-4bdd-b5cb-f747aede7cae] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 30 21:31:07.357: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T21:30:47Z generation:2 name:name2 resourceVersion:5370677 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4af6c31c-9780-4139-9deb-d182ee3b2f0d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 30 21:31:17.375: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T21:30:36Z generation:2 name:name1 resourceVersion:5370699 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 
uid:f3d2f273-a85b-4bdd-b5cb-f747aede7cae] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 30 21:31:27.402: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-30T21:30:47Z generation:2 name:name2 resourceVersion:5370723 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4af6c31c-9780-4139-9deb-d182ee3b2f0d] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:31:37.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9865" for this suite. • [SLOW TEST:63.521 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":58,"skipped":927,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:31:37.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:31:37.998: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:31:39.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8921" for this suite. 
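Both CustomResourceDefinition cases above work against a cluster-scoped CRD in the mygroup.example.com group; the selfLink /apis/mygroup.example.com/v1beta1/noxus/name1 carries no namespace segment. A sketch of a matching v1beta1 definition, with field values inferred from the events above, so treat it as illustrative:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Cluster
  names:
    plural: noxus
    singular: noxu
    kind: WishIHadChosenNoxu

Once the CRD is established, the ADDED/MODIFIED/DELETED events in the watch test correspond to what kubectl get noxus --watch -o yaml would stream.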
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":59,"skipped":947,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:31:39.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0130 21:31:45.474064 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 30 21:31:45.474: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:31:45.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-862" for this suite. 
• [SLOW TEST:6.387 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":60,"skipped":955,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:31:45.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-0796ddf8-9ce9-4c20-8b79-e14ab6c75942 STEP: Creating a pod to test consume secrets Jan 30 21:31:46.276: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2" in namespace "projected-9847" to be "success or failure" Jan 30 21:31:46.536: INFO: Pod "pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2": Phase="Pending", Reason="", readiness=false. Elapsed: 259.794844ms Jan 30 21:31:48.546: INFO: Pod "pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269239215s Jan 30 21:31:50.632: INFO: Pod "pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355578151s Jan 30 21:31:52.643: INFO: Pod "pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.366472762s Jan 30 21:31:54.692: INFO: Pod "pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.415555643s Jan 30 21:31:56.714: INFO: Pod "pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.437608042s Jan 30 21:31:58.720: INFO: Pod "pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.443127285s Jan 30 21:32:00.725: INFO: Pod "pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.448063426s STEP: Saw pod success Jan 30 21:32:00.725: INFO: Pod "pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2" satisfied condition "success or failure" Jan 30 21:32:00.727: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2 container projected-secret-volume-test: STEP: delete the pod Jan 30 21:32:00.926: INFO: Waiting for pod pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2 to disappear Jan 30 21:32:00.941: INFO: Pod pod-projected-secrets-2b9aa8ac-2883-4ba3-973a-14957e8269c2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:32:00.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9847" for this suite. • [SLOW TEST:15.457 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":959,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:32:00.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 30 21:32:01.040: INFO: Waiting up to 5m0s for pod "pod-bfa57a20-d8ca-4bba-a263-4f929e1ba52a" in namespace "emptydir-6991" to be "success or failure" Jan 30 21:32:01.062: INFO: Pod "pod-bfa57a20-d8ca-4bba-a263-4f929e1ba52a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.147505ms Jan 30 21:32:03.074: INFO: Pod "pod-bfa57a20-d8ca-4bba-a263-4f929e1ba52a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033556525s Jan 30 21:32:05.172: INFO: Pod "pod-bfa57a20-d8ca-4bba-a263-4f929e1ba52a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132416284s Jan 30 21:32:07.182: INFO: Pod "pod-bfa57a20-d8ca-4bba-a263-4f929e1ba52a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141581639s Jan 30 21:32:09.189: INFO: Pod "pod-bfa57a20-d8ca-4bba-a263-4f929e1ba52a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.149286132s STEP: Saw pod success Jan 30 21:32:09.189: INFO: Pod "pod-bfa57a20-d8ca-4bba-a263-4f929e1ba52a" satisfied condition "success or failure" Jan 30 21:32:09.194: INFO: Trying to get logs from node jerma-node pod pod-bfa57a20-d8ca-4bba-a263-4f929e1ba52a container test-container: STEP: delete the pod Jan 30 21:32:09.257: INFO: Waiting for pod pod-bfa57a20-d8ca-4bba-a263-4f929e1ba52a to disappear Jan 30 21:32:09.271: INFO: Pod pod-bfa57a20-d8ca-4bba-a263-4f929e1ba52a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:32:09.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6991" for this suite. • [SLOW TEST:8.372 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":966,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:32:09.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:32:09.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2867" for this suite. 
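In the Projected secret case a little earlier, "mappings and Item Mode" means individual secret keys are remapped to new file paths with an explicit per-file mode instead of the volume-wide default. A sketch of such a volume, assuming a hypothetical key name:

volumes:
- name: projected-secret-volume
  projected:
    sources:
    - secret:
        name: projected-secret-test-map-0796ddf8-9ce9-4c20-8b79-e14ab6c75942
        items:
        - key: data-1              # hypothetical key inside the secret
          path: new-path-data-1    # file name the key is remapped to
          mode: 0400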
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":63,"skipped":972,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:32:09.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 30 21:32:09.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6681' Jan 30 21:32:09.714: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 30 21:32:09.714: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Jan 30 21:32:09.737: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-7sqf9] Jan 30 21:32:09.738: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-7sqf9" in namespace "kubectl-6681" to be "running and ready" Jan 30 21:32:09.785: INFO: Pod "e2e-test-httpd-rc-7sqf9": Phase="Pending", Reason="", readiness=false. Elapsed: 47.886003ms Jan 30 21:32:11.796: INFO: Pod "e2e-test-httpd-rc-7sqf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058008273s Jan 30 21:32:13.803: INFO: Pod "e2e-test-httpd-rc-7sqf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065629881s Jan 30 21:32:15.811: INFO: Pod "e2e-test-httpd-rc-7sqf9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073931374s Jan 30 21:32:17.822: INFO: Pod "e2e-test-httpd-rc-7sqf9": Phase="Running", Reason="", readiness=true. Elapsed: 8.084752678s Jan 30 21:32:17.823: INFO: Pod "e2e-test-httpd-rc-7sqf9" satisfied condition "running and ready" Jan 30 21:32:17.823: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-7sqf9] Jan 30 21:32:17.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6681' Jan 30 21:32:18.054: INFO: stderr: "" Jan 30 21:32:18.055: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. 
Set the 'ServerName' directive globally to suppress this message\n[Thu Jan 30 21:32:14.691508 2020] [mpm_event:notice] [pid 1:tid 140490775120744] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Jan 30 21:32:14.691561 2020] [core:notice] [pid 1:tid 140490775120744] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jan 30 21:32:18.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6681' Jan 30 21:32:18.148: INFO: stderr: "" Jan 30 21:32:18.148: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:32:18.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6681" for this suite. • [SLOW TEST:8.693 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":64,"skipped":980,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:32:18.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:32:25.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1217" for this suite. • [SLOW TEST:7.307 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":65,"skipped":989,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:32:25.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:32:25.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1884" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":66,"skipped":997,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:32:25.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jan 30 21:32:25.724: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 30 21:32:25.752: INFO: Waiting for terminating namespaces to be deleted... 
Jan 30 21:32:25.757: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 30 21:32:25.769: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 30 21:32:25.769: INFO: Container kube-proxy ready: true, restart count 0 Jan 30 21:32:25.769: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 30 21:32:25.769: INFO: Container weave ready: true, restart count 1 Jan 30 21:32:25.769: INFO: Container weave-npc ready: true, restart count 0 Jan 30 21:32:25.769: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 30 21:32:25.814: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 30 21:32:25.814: INFO: Container coredns ready: true, restart count 0 Jan 30 21:32:25.814: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 30 21:32:25.814: INFO: Container coredns ready: true, restart count 0 Jan 30 21:32:25.814: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 30 21:32:25.814: INFO: Container weave ready: true, restart count 0 Jan 30 21:32:25.814: INFO: Container weave-npc ready: true, restart count 0 Jan 30 21:32:25.814: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 30 21:32:25.814: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 30 21:32:25.814: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 30 21:32:25.814: INFO: Container kube-proxy ready: true, restart count 0 Jan 30 21:32:25.814: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 30 21:32:25.814: INFO: Container kube-scheduler ready: true, restart count 4 Jan 30 21:32:25.814: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 30 21:32:25.814: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 21:32:25.815: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 30 21:32:25.815: INFO: Container etcd ready: true, restart count 1 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
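The steps that follow pin two pods to the freshly labeled node and collide them on hostPort 54322: pod4 binds hostIP 0.0.0.0 and schedules, while pod5 requests 127.0.0.1 on the same port and protocol and must stay unscheduled. The colliding stanza sits on the container spec, sketched here with hypothetical values:

ports:
- containerPort: 8080      # hypothetical container port
  hostPort: 54322
  hostIP: 127.0.0.1        # pod5; pod4 leaves this empty, i.e. 0.0.0.0
  protocol: TCP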
STEP: verifying the node has the label kubernetes.io/e2e-27c2b7db-aec5-4e40-bd18-0140ffdd03ea 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-27c2b7db-aec5-4e40-bd18-0140ffdd03ea off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-27c2b7db-aec5-4e40-bd18-0140ffdd03ea [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:37:40.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8946" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:315.117 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":67,"skipped":1028,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:37:40.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 30 21:37:41.559: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 30 21:37:43.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:37:45.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:37:47.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017061, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 21:37:50.621: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration Jan 30 21:37:50.706: INFO: Waiting for webhook configuration to be ready... STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:37:50.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-475" for this suite. STEP: Destroying namespace "webhook-475-markers" for this suite. 
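The patching/updating case above toggles which operations the webhook intercepts by rewriting its rules; once CREATE is dropped from operations, the otherwise non-compliant configMap is admitted. A sketch of such a configuration, with hypothetical names and the caBundle omitted:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-webhook-config
webhooks:
- name: deny-configmap.example.com
  clientConfig:
    service:
      namespace: webhook-475
      name: e2e-test-webhook
      path: /always-deny       # hypothetical handler path
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["UPDATE"]     # CREATE removed, so creates pass through
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  failurePolicy: Fail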
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.414 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":68,"skipped":1033,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:37:51.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 30 21:37:51.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4964' Jan 30 21:37:51.555: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 30 21:37:51.555: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Jan 30 21:37:51.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-4964' Jan 30 21:37:51.977: INFO: stderr: "" Jan 30 21:37:51.977: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:37:51.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4964" for this suite. 
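Both kubectl run invocations in this run rely on generators that v1.17 already flags as deprecated (--generator=run/v1 for the ReplicationController earlier, --generator=job/v1 here). The Job the second one produces is roughly equivalent to applying:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-httpd-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-httpd-job
        image: docker.io/library/httpd:2.4.38-alpine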
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":69,"skipped":1050,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:37:51.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 30 21:37:52.806: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 30 21:37:54.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:37:56.838: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:37:58.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:38:00.836: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:38:02.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017072, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 21:38:05.885: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:38:05.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4669-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:38:07.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-6727" for this suite. STEP: Destroying namespace "webhook-6727-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.380 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":70,"skipped":1061,"failed":0} [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:38:07.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jan 30 21:38:18.053: INFO: Successfully updated pod "annotationupdate6984d81a-aec0-4b78-b744-0871c7e02902" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:38:20.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9835" for this suite. 
• [SLOW TEST:12.755 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1061,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:38:20.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 30 21:38:28.878: INFO: Successfully updated pod "pod-update-2e4c74b5-9525-4a0f-a738-238429f433bf" STEP: verifying the updated pod is in kubernetes Jan 30 21:38:28.896: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:38:28.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3332" for this suite. 
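"should be updated" leans on the short list of things that are mutable on a live pod: metadata such as labels and annotations, spec.containers[*].image, spec.activeDeadlineSeconds, and additions to spec.tolerations; everything else in spec is immutable. The update amounts to a strategic-merge patch of this shape, with a hypothetical label:

metadata:
  labels:
    time: updated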
• [SLOW TEST:8.952 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1087,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:38:29.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 30 21:38:29.277: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:38:44.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2920" for this suite. 
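The init-container spec above creates a pod whose init containers must all exit 0, in order, before the app container starts. A minimal sketch modelled on the pod dump that appears later in this log for the companion RestartAlways spec (names and images follow that dump; here both init commands succeed, matching the "should invoke" case).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo", Labels: map[string]string{"name": "foo"}},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// init1 then init2 must both terminate successfully before run1 starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}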
• [SLOW TEST:15.557 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":73,"skipped":1111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:38:44.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-55f0f39d-ef67-465e-ab53-fea3c42bb056 STEP: Creating a pod to test consume secrets Jan 30 21:38:44.736: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-366f90c5-1d2e-4755-808f-e29abea6086d" in namespace "projected-9096" to be "success or failure" Jan 30 21:38:44.779: INFO: Pod "pod-projected-secrets-366f90c5-1d2e-4755-808f-e29abea6086d": Phase="Pending", Reason="", readiness=false. Elapsed: 43.351084ms Jan 30 21:38:46.786: INFO: Pod "pod-projected-secrets-366f90c5-1d2e-4755-808f-e29abea6086d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050231966s Jan 30 21:38:48.795: INFO: Pod "pod-projected-secrets-366f90c5-1d2e-4755-808f-e29abea6086d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05892033s Jan 30 21:38:50.803: INFO: Pod "pod-projected-secrets-366f90c5-1d2e-4755-808f-e29abea6086d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066438257s Jan 30 21:38:52.813: INFO: Pod "pod-projected-secrets-366f90c5-1d2e-4755-808f-e29abea6086d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076647866s STEP: Saw pod success Jan 30 21:38:52.813: INFO: Pod "pod-projected-secrets-366f90c5-1d2e-4755-808f-e29abea6086d" satisfied condition "success or failure" Jan 30 21:38:52.818: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-366f90c5-1d2e-4755-808f-e29abea6086d container projected-secret-volume-test: STEP: delete the pod Jan 30 21:38:52.874: INFO: Waiting for pod pod-projected-secrets-366f90c5-1d2e-4755-808f-e29abea6086d to disappear Jan 30 21:38:52.893: INFO: Pod pod-projected-secrets-366f90c5-1d2e-4755-808f-e29abea6086d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:38:52.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9096" for this suite. 
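The projected-secret spec above exercises three knobs at once: a projected volume wrapping a Secret, a non-default file mode, and pod-level non-root identity with fsGroup ownership. A minimal sketch with assumed uid/gid/mode values (the suite's exact numbers are not printed here). The plain-Secret variant of this spec later in the log (completed:83) differs only in using VolumeSource.Secret with the same DefaultMode field.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0440)                 // assumed defaultMode
	uid, gid := int64(1000), int64(1001) // assumed non-root uid and fsGroup
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			// Files in the volume get group ownership gid and mode 0440.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -ln /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-secret", MountPath: "/etc/projected", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}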
• [SLOW TEST:8.295 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1139,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:38:52.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 30 21:39:09.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 30 21:39:09.203: INFO: Pod pod-with-poststart-http-hook still exists Jan 30 21:39:11.203: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 30 21:39:11.212: INFO: Pod pod-with-poststart-http-hook still exists Jan 30 21:39:13.203: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 30 21:39:13.212: INFO: Pod pod-with-poststart-http-hook still exists Jan 30 21:39:15.203: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 30 21:39:15.212: INFO: Pod pod-with-poststart-http-hook still exists Jan 30 21:39:17.203: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 30 21:39:17.213: INFO: Pod pod-with-poststart-http-hook still exists Jan 30 21:39:19.203: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 30 21:39:19.210: INFO: Pod pod-with-poststart-http-hook still exists Jan 30 21:39:21.203: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 30 21:39:21.209: INFO: Pod pod-with-poststart-http-hook still exists Jan 30 21:39:23.203: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 30 21:39:23.210: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:39:23.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3804" for this suite. 
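The lifecycle-hook spec first starts a handler pod (the "container to handle the HTTPGet hook request"), then creates a pod whose postStart hook sends an HTTP GET to it; the long "still exists" tail above is just the deletion poll. A sketch of the hook wiring; host, port, and path are assumed, and note that corev1.Handler is the v1.17-era type name used by this run (newer k8s.io/api versions call the same struct LifecycleHandler).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1",
				Lifecycle: &corev1.Lifecycle{
					// Fired immediately after the container starts; the kubelet
					// retries/fails the container if the GET does not succeed.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // assumed handler path
							Host: "10.44.0.1",           // assumed handler pod IP
							Port: intstr.FromInt(8080),  // assumed handler port
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}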
• [SLOW TEST:30.278 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:39:23.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jan 30 21:39:23.406: INFO: Waiting up to 5m0s for pod "downwardapi-volume-faf3e2cd-543c-40d3-95d4-00ef95962321" in namespace "downward-api-3479" to be "success or failure" Jan 30 21:39:23.421: INFO: Pod "downwardapi-volume-faf3e2cd-543c-40d3-95d4-00ef95962321": Phase="Pending", Reason="", readiness=false. Elapsed: 14.666071ms Jan 30 21:39:25.430: INFO: Pod "downwardapi-volume-faf3e2cd-543c-40d3-95d4-00ef95962321": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0236248s Jan 30 21:39:27.438: INFO: Pod "downwardapi-volume-faf3e2cd-543c-40d3-95d4-00ef95962321": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031628473s Jan 30 21:39:29.445: INFO: Pod "downwardapi-volume-faf3e2cd-543c-40d3-95d4-00ef95962321": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038789224s Jan 30 21:39:31.450: INFO: Pod "downwardapi-volume-faf3e2cd-543c-40d3-95d4-00ef95962321": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043791795s STEP: Saw pod success Jan 30 21:39:31.450: INFO: Pod "downwardapi-volume-faf3e2cd-543c-40d3-95d4-00ef95962321" satisfied condition "success or failure" Jan 30 21:39:31.455: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-faf3e2cd-543c-40d3-95d4-00ef95962321 container client-container: STEP: delete the pod Jan 30 21:39:31.507: INFO: Waiting for pod downwardapi-volume-faf3e2cd-543c-40d3-95d4-00ef95962321 to disappear Jan 30 21:39:31.514: INFO: Pod downwardapi-volume-faf3e2cd-543c-40d3-95d4-00ef95962321 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:39:31.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3479" for this suite. 
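Unlike the annotations case, this downward API spec reads a resource limit rather than an object field, so the volume item uses ResourceFieldRef instead of FieldRef. A minimal sketch; the 1250m limit and names are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							// Resolves to the container's limits.cpu, written as a file.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}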
• [SLOW TEST:8.351 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:39:31.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 30 21:39:32.513: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 30 21:39:34.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:39:36.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:39:38.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:39:40.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017172, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 21:39:43.598: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:39:43.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4650" for this suite. STEP: Destroying namespace "webhook-4650-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.317 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":77,"skipped":1192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:39:43.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:39:44.016: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-0d47e7bf-abdb-435d-b0e3-505d3f14d57b" in namespace "security-context-test-2563" to be "success or failure" Jan 30 21:39:44.020: INFO: Pod "busybox-readonly-false-0d47e7bf-abdb-435d-b0e3-505d3f14d57b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472869ms Jan 30 21:39:46.025: INFO: Pod "busybox-readonly-false-0d47e7bf-abdb-435d-b0e3-505d3f14d57b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009233242s Jan 30 21:39:48.040: INFO: Pod "busybox-readonly-false-0d47e7bf-abdb-435d-b0e3-505d3f14d57b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023841366s Jan 30 21:39:50.577: INFO: Pod "busybox-readonly-false-0d47e7bf-abdb-435d-b0e3-505d3f14d57b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.560895851s Jan 30 21:39:52.588: INFO: Pod "busybox-readonly-false-0d47e7bf-abdb-435d-b0e3-505d3f14d57b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.572201521s Jan 30 21:39:52.588: INFO: Pod "busybox-readonly-false-0d47e7bf-abdb-435d-b0e3-505d3f14d57b" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:39:52.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2563" for this suite. 
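The Security Context spec above only needs a container that writes to its own root filesystem and succeeds because readOnlyRootFilesystem is explicitly false. A minimal sketch; the write path and image tag are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	writable := false // readOnlyRootFilesystem=false, so the rootfs stays writable
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-readonly-false",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "echo hello > /rootfs-write-test && cat /rootfs-write-test"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &writable,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}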
• [SLOW TEST:8.732 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:39:52.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4004 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4004 STEP: creating replication controller externalsvc in namespace services-4004 I0130 21:39:52.891610 8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4004, replica count: 2 I0130 21:39:55.943090 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 21:39:58.943698 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 21:40:01.944257 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0130 21:40:04.945029 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 30 21:40:05.023: INFO: Creating new exec pod Jan 30 21:40:13.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4004 execpodfb5qb -- /bin/sh -x -c nslookup clusterip-service' Jan 30 21:40:15.351: INFO: stderr: "I0130 21:40:15.135359 699 log.go:172] (0xc0000f5760) (0xc0006b0000) Create stream\nI0130 21:40:15.135473 699 log.go:172] (0xc0000f5760) (0xc0006b0000) Stream added, broadcasting: 1\nI0130 21:40:15.140379 699 log.go:172] (0xc0000f5760) Reply frame received for 1\nI0130 21:40:15.140468 699 log.go:172] (0xc0000f5760) (0xc000758000) Create 
stream\nI0130 21:40:15.140481 699 log.go:172] (0xc0000f5760) (0xc000758000) Stream added, broadcasting: 3\nI0130 21:40:15.142701 699 log.go:172] (0xc0000f5760) Reply frame received for 3\nI0130 21:40:15.142748 699 log.go:172] (0xc0000f5760) (0xc00076e000) Create stream\nI0130 21:40:15.142755 699 log.go:172] (0xc0000f5760) (0xc00076e000) Stream added, broadcasting: 5\nI0130 21:40:15.144283 699 log.go:172] (0xc0000f5760) Reply frame received for 5\nI0130 21:40:15.220217 699 log.go:172] (0xc0000f5760) Data frame received for 5\nI0130 21:40:15.220262 699 log.go:172] (0xc00076e000) (5) Data frame handling\nI0130 21:40:15.220298 699 log.go:172] (0xc00076e000) (5) Data frame sent\nI0130 21:40:15.220321 699 log.go:172] (0xc0000f5760) Data frame received for 5\nI0130 21:40:15.220336 699 log.go:172] (0xc00076e000) (5) Data frame handling\n+ nslookup clusterip-service\nI0130 21:40:15.220404 699 log.go:172] (0xc00076e000) (5) Data frame sent\nI0130 21:40:15.244268 699 log.go:172] (0xc0000f5760) Data frame received for 3\nI0130 21:40:15.244341 699 log.go:172] (0xc000758000) (3) Data frame handling\nI0130 21:40:15.244382 699 log.go:172] (0xc000758000) (3) Data frame sent\nI0130 21:40:15.246278 699 log.go:172] (0xc0000f5760) Data frame received for 3\nI0130 21:40:15.246296 699 log.go:172] (0xc000758000) (3) Data frame handling\nI0130 21:40:15.246313 699 log.go:172] (0xc000758000) (3) Data frame sent\nI0130 21:40:15.332144 699 log.go:172] (0xc0000f5760) Data frame received for 1\nI0130 21:40:15.332292 699 log.go:172] (0xc0000f5760) (0xc000758000) Stream removed, broadcasting: 3\nI0130 21:40:15.332348 699 log.go:172] (0xc0006b0000) (1) Data frame handling\nI0130 21:40:15.332382 699 log.go:172] (0xc0006b0000) (1) Data frame sent\nI0130 21:40:15.332541 699 log.go:172] (0xc0000f5760) (0xc00076e000) Stream removed, broadcasting: 5\nI0130 21:40:15.332592 699 log.go:172] (0xc0000f5760) (0xc0006b0000) Stream removed, broadcasting: 1\nI0130 21:40:15.332622 699 log.go:172] (0xc0000f5760) Go away received\nI0130 21:40:15.333308 699 log.go:172] (0xc0000f5760) (0xc0006b0000) Stream removed, broadcasting: 1\nI0130 21:40:15.333334 699 log.go:172] (0xc0000f5760) (0xc000758000) Stream removed, broadcasting: 3\nI0130 21:40:15.333348 699 log.go:172] (0xc0000f5760) (0xc00076e000) Stream removed, broadcasting: 5\n" Jan 30 21:40:15.351: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4004.svc.cluster.local\tcanonical name = externalsvc.services-4004.svc.cluster.local.\nName:\texternalsvc.services-4004.svc.cluster.local\nAddress: 10.96.64.177\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4004, will wait for the garbage collector to delete the pods Jan 30 21:40:15.415: INFO: Deleting ReplicationController externalsvc took: 8.699566ms Jan 30 21:40:15.716: INFO: Terminating ReplicationController externalsvc pods took: 300.827129ms Jan 30 21:40:33.192: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:40:33.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4004" for this suite. 
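The nslookup output above is the whole point of this Services spec: once the ClusterIP service is flipped to type ExternalName, cluster DNS answers with a CNAME to the target FQDN instead of an A record for a cluster IP. A sketch of the post-mutation service, using the names from this run; the e2e also clears the allocated ClusterIP and ports when flipping the type, as noted in the comment.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &corev1.Service{
		TypeMeta:   metav1.TypeMeta{Kind: "Service", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "clusterip-service", Namespace: "services-4004"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeExternalName,
			// DNS for clusterip-service now returns a CNAME to this name,
			// matching the nslookup output logged above. ClusterIP and Ports
			// must be cleared when switching an existing service to this type.
			ExternalName: "externalsvc.services-4004.svc.cluster.local",
		},
	}
	b, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(b))
}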
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:40.617 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":79,"skipped":1242,"failed":0} SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:40:33.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jan 30 21:40:33.330: INFO: PodSpec: initContainers in spec.initContainers Jan 30 21:41:30.059: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-50a85e60-f356-4db1-bf70-c434874f44a1", GenerateName:"", Namespace:"init-container-874", SelfLink:"/api/v1/namespaces/init-container-874/pods/pod-init-50a85e60-f356-4db1-bf70-c434874f44a1", UID:"42b26bfc-6bb8-461c-9adc-25821ff93a29", ResourceVersion:"5372943", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716017233, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"330781015"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mddxh", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00205ee00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mddxh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mddxh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mddxh", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc0022784a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024db800), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0022786c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002278770)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002278778), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00227877c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017233, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017233, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017233, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017233, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0022333e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027cd730)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027cd7a0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://d0f6a186df2ef4a73d75aa4dd4378669c8d7b2abac44ba381690f725249faa0e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002233420), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002233400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0022787ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:41:30.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-874" for this suite. • [SLOW TEST:56.838 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":80,"skipped":1248,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:41:30.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:41:38.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5424" for this suite. 
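The "EmptyDir wrapper volumes should not conflict" spec cleans up a secret, a configmap, and a pod, which suggests a single pod mounting both kinds of volume; the kubelet backs each with its own wrapped emptyDir, and the spec checks the mounts do not collide. A sketch under that assumption (the suite's exact volume set is not printed in this log); all names are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "wrapper-test",
				Image: "k8s.gcr.io/pause:3.1",
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
				},
			}},
			// Two differently-backed volumes in one pod; each gets its own
			// kubelet-managed wrapper directory, so neither clobbers the other.
			Volumes: []corev1.Volume{
				{Name: "secret-volume", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"},
				}},
				{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}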
• [SLOW TEST:8.299 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":81,"skipped":1253,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:41:38.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 30 21:41:39.849: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 30 21:41:41.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 30 21:41:43.880: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Jan 30 21:41:45.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017299, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 30 21:41:48.957: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:41:48.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:41:50.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2861" for this suite. STEP: Destroying namespace "webhook-2861-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.058 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":82,"skipped":1259,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:41:50.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-545563f1-989a-42e8-95df-256e4e7c102f STEP: Creating a pod to test consume secrets Jan 30 21:41:50.552: INFO: Waiting up to 5m0s for pod "pod-secrets-36772a1a-560f-460b-a713-01e034f6a016" in namespace "secrets-3316" to be "success or failure" Jan 30 21:41:50.623: INFO: Pod "pod-secrets-36772a1a-560f-460b-a713-01e034f6a016": Phase="Pending", Reason="", readiness=false. Elapsed: 70.671343ms Jan 30 21:41:52.631: INFO: Pod "pod-secrets-36772a1a-560f-460b-a713-01e034f6a016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078089576s Jan 30 21:41:54.643: INFO: Pod "pod-secrets-36772a1a-560f-460b-a713-01e034f6a016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090752288s Jan 30 21:41:56.652: INFO: Pod "pod-secrets-36772a1a-560f-460b-a713-01e034f6a016": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099758968s Jan 30 21:41:58.665: INFO: Pod "pod-secrets-36772a1a-560f-460b-a713-01e034f6a016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112894347s STEP: Saw pod success Jan 30 21:41:58.666: INFO: Pod "pod-secrets-36772a1a-560f-460b-a713-01e034f6a016" satisfied condition "success or failure" Jan 30 21:41:58.673: INFO: Trying to get logs from node jerma-node pod pod-secrets-36772a1a-560f-460b-a713-01e034f6a016 container secret-volume-test: STEP: delete the pod Jan 30 21:41:58.834: INFO: Waiting for pod pod-secrets-36772a1a-560f-460b-a713-01e034f6a016 to disappear Jan 30 21:41:58.839: INFO: Pod pod-secrets-36772a1a-560f-460b-a713-01e034f6a016 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:41:58.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3316" for this suite. 
• [SLOW TEST:8.409 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1276,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:41:58.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 30 21:42:09.580: INFO: Successfully updated pod "adopt-release-4mwwg" STEP: Checking that the Job readopts the Pod Jan 30 21:42:09.580: INFO: Waiting up to 15m0s for pod "adopt-release-4mwwg" in namespace "job-1906" to be "adopted" Jan 30 21:42:09.600: INFO: Pod "adopt-release-4mwwg": Phase="Running", Reason="", readiness=true. Elapsed: 20.495591ms Jan 30 21:42:11.609: INFO: Pod "adopt-release-4mwwg": Phase="Running", Reason="", readiness=true. Elapsed: 2.02921102s Jan 30 21:42:11.609: INFO: Pod "adopt-release-4mwwg" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 30 21:42:12.124: INFO: Successfully updated pod "adopt-release-4mwwg" STEP: Checking that the Job releases the Pod Jan 30 21:42:12.124: INFO: Waiting up to 15m0s for pod "adopt-release-4mwwg" in namespace "job-1906" to be "released" Jan 30 21:42:12.188: INFO: Pod "adopt-release-4mwwg": Phase="Running", Reason="", readiness=true. Elapsed: 63.221181ms Jan 30 21:42:12.188: INFO: Pod "adopt-release-4mwwg" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:42:12.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1906" for this suite. 
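The Job adopt/release mechanics above hinge on labels and ownerReferences: stripping a pod's controller ownerReference orphans it, the Job controller re-adopts it while its labels still match the Job's selector, and removing the labels makes the controller release it again. A sketch of a Job of the shape this spec uses; parallelism from the "active pods == parallelism" step, everything else illustrative.

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	parallelism := int32(2)
	job := &batchv1.Job{
		TypeMeta:   metav1.TypeMeta{Kind: "Job", APIVersion: "batch/v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "adopt-release"},
		Spec: batchv1.JobSpec{
			Parallelism: &parallelism,
			Template: corev1.PodTemplateSpec{
				// The generated selector matches these labels; deleting them from a
				// running pod is what makes the controller release it.
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"job": "adopt-release"}},
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "c",
						Image:   "busybox:1.29",
						Command: []string{"sleep", "3600"}, // keep pods Running for the adoption checks
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(b))
}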
• [SLOW TEST:13.362 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":84,"skipped":1308,"failed":0} [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:42:12.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:42:24.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8051" for this suite. • [SLOW TEST:12.224 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1308,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:42:24.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:42:12.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:42:24.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8051" for this suite.

• [SLOW TEST:12.224 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1308,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:42:24.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:42:31.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8662" for this suite.
STEP: Destroying namespace "nsdeletetest-3004" for this suite.
Jan 30 21:42:31.893: INFO: Namespace nsdeletetest-3004 was already deleted
STEP: Destroying namespace "nsdeletetest-215" for this suite.

• [SLOW TEST:7.460 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":86,"skipped":1317,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:42:31.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 21:42:32.056: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-14897b7a-f70e-47db-857c-eade1f96ec47" in namespace "security-context-test-2554" to be "success or failure"
Jan 30 21:42:32.128: INFO: Pod "alpine-nnp-false-14897b7a-f70e-47db-857c-eade1f96ec47": Phase="Pending", Reason="", readiness=false. Elapsed: 71.76166ms
Jan 30 21:42:34.133: INFO: Pod "alpine-nnp-false-14897b7a-f70e-47db-857c-eade1f96ec47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077265238s
Jan 30 21:42:36.139: INFO: Pod "alpine-nnp-false-14897b7a-f70e-47db-857c-eade1f96ec47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083253596s
Jan 30 21:42:38.150: INFO: Pod "alpine-nnp-false-14897b7a-f70e-47db-857c-eade1f96ec47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094386164s
Jan 30 21:42:38.151: INFO: Pod "alpine-nnp-false-14897b7a-f70e-47db-857c-eade1f96ec47" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:42:38.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2554" for this suite.

• [SLOW TEST:6.324 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1327,"failed":0}
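For reference, disabling privilege escalation is a per-container securityContext field. A minimal sketch of a container in the spirit of the alpine-nnp-false pod above (name, image, and command are illustrative, not taken from the test source):

```go
package sketch

import v1 "k8s.io/api/core/v1"

var noEscalation = false

// nnpContainer disables privilege escalation (the process and its children
// cannot gain more privileges than the container started with).
var nnpContainer = v1.Container{
	Name:    "no-new-privs",             // illustrative
	Image:   "alpine:3.11",              // illustrative
	Command: []string{"sh", "-c", "id"}, // illustrative
	SecurityContext: &v1.SecurityContext{
		AllowPrivilegeEscalation: &noEscalation,
	},
}
```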
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:42:38.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-631064f4-f893-4ad4-991a-dabd9d37722f
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:42:48.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7460" for this suite.

• [SLOW TEST:10.232 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1330,"failed":0}
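The test above exercises the ConfigMap BinaryData field, which holds arbitrary bytes alongside the string-valued Data map; when the ConfigMap is mounted as a volume, each key becomes a file with the raw contents. A minimal sketch (name, keys, and bytes are illustrative):

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// binaryConfigMap carries both text (Data) and raw bytes (BinaryData);
// "text-key" and "binary-key" become files under the volume's mount path.
var binaryConfigMap = v1.ConfigMap{
	ObjectMeta: metav1.ObjectMeta{Name: "configmap-with-binary"}, // illustrative
	Data:       map[string]string{"text-key": "hello"},
	BinaryData: map[string][]byte{"binary-key": {0xff, 0xfe, 0x00, 0x01}},
}
```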
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":89,"skipped":1336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:42:48.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Jan 30 21:42:59.026: INFO: Pod pod-hostip-453d5d55-ea00-49d3-9be9-aaa6dfad77f7 has hostIP: 10.96.2.250 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jan 30 21:42:59.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3898" for this suite. • [SLOW TEST:10.190 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1384,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jan 30 21:42:59.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jan 30 21:42:59.167: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/:
alternatives.log
apt/
... (200; 25.201162ms)
Jan 30 21:42:59.171: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.653275ms)
Jan 30 21:42:59.175: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.393674ms)
Jan 30 21:42:59.178: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.715405ms)
Jan 30 21:42:59.194: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 16.149019ms)
Jan 30 21:42:59.218: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 23.256725ms)
Jan 30 21:42:59.226: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 7.999207ms)
Jan 30 21:42:59.231: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 5.168906ms)
Jan 30 21:42:59.239: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 8.014379ms)
Jan 30 21:42:59.243: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.857402ms)
Jan 30 21:42:59.247: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.0861ms)
Jan 30 21:42:59.250: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.65797ms)
Jan 30 21:42:59.254: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.68512ms)
Jan 30 21:42:59.258: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.300617ms)
Jan 30 21:42:59.262: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.719754ms)
Jan 30 21:42:59.266: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.490494ms)
Jan 30 21:42:59.271: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.933075ms)
Jan 30 21:42:59.274: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.806483ms)
Jan 30 21:42:59.279: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.756258ms)
Jan 30 21:42:59.283: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.812237ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:42:59.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5522" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":91,"skipped":1399,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:42:59.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-cf935e58-076f-44da-88c2-a50998fdf677
STEP: Creating a pod to test consume secrets
Jan 30 21:42:59.503: INFO: Waiting up to 5m0s for pod "pod-secrets-12679bd7-4899-408e-b3b2-be7033443b19" in namespace "secrets-9146" to be "success or failure"
Jan 30 21:42:59.533: INFO: Pod "pod-secrets-12679bd7-4899-408e-b3b2-be7033443b19": Phase="Pending", Reason="", readiness=false. Elapsed: 30.116852ms
Jan 30 21:43:01.540: INFO: Pod "pod-secrets-12679bd7-4899-408e-b3b2-be7033443b19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036685661s
Jan 30 21:43:03.548: INFO: Pod "pod-secrets-12679bd7-4899-408e-b3b2-be7033443b19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044688947s
Jan 30 21:43:05.555: INFO: Pod "pod-secrets-12679bd7-4899-408e-b3b2-be7033443b19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051596611s
Jan 30 21:43:07.563: INFO: Pod "pod-secrets-12679bd7-4899-408e-b3b2-be7033443b19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059836603s
STEP: Saw pod success
Jan 30 21:43:07.563: INFO: Pod "pod-secrets-12679bd7-4899-408e-b3b2-be7033443b19" satisfied condition "success or failure"
Jan 30 21:43:07.567: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-secrets-12679bd7-4899-408e-b3b2-be7033443b19 container secret-volume-test: 
STEP: delete the pod
Jan 30 21:43:07.738: INFO: Waiting for pod pod-secrets-12679bd7-4899-408e-b3b2-be7033443b19 to disappear
Jan 30 21:43:07.753: INFO: Pod pod-secrets-12679bd7-4899-408e-b3b2-be7033443b19 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:43:07.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9146" for this suite.
STEP: Destroying namespace "secret-namespace-6138" for this suite.

• [SLOW TEST:8.655 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1417,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:43:07.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:44:09.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9859" for this suite.

• [SLOW TEST:61.531 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1438,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:44:09.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-61196a12-31ce-4cf7-a259-86ef615c96d1
STEP: Creating a pod to test consume secrets
Jan 30 21:44:09.650: INFO: Waiting up to 5m0s for pod "pod-secrets-a9eed7f4-fb35-4cce-863c-6b6ea88722db" in namespace "secrets-3275" to be "success or failure"
Jan 30 21:44:09.669: INFO: Pod "pod-secrets-a9eed7f4-fb35-4cce-863c-6b6ea88722db": Phase="Pending", Reason="", readiness=false. Elapsed: 18.416072ms
Jan 30 21:44:11.675: INFO: Pod "pod-secrets-a9eed7f4-fb35-4cce-863c-6b6ea88722db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024916656s
Jan 30 21:44:13.680: INFO: Pod "pod-secrets-a9eed7f4-fb35-4cce-863c-6b6ea88722db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029096595s
Jan 30 21:44:15.687: INFO: Pod "pod-secrets-a9eed7f4-fb35-4cce-863c-6b6ea88722db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036942845s
Jan 30 21:44:17.698: INFO: Pod "pod-secrets-a9eed7f4-fb35-4cce-863c-6b6ea88722db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047565127s
STEP: Saw pod success
Jan 30 21:44:17.698: INFO: Pod "pod-secrets-a9eed7f4-fb35-4cce-863c-6b6ea88722db" satisfied condition "success or failure"
Jan 30 21:44:17.703: INFO: Trying to get logs from node jerma-node pod pod-secrets-a9eed7f4-fb35-4cce-863c-6b6ea88722db container secret-volume-test: 
STEP: delete the pod
Jan 30 21:44:17.750: INFO: Waiting for pod pod-secrets-a9eed7f4-fb35-4cce-863c-6b6ea88722db to disappear
Jan 30 21:44:17.811: INFO: Pod pod-secrets-a9eed7f4-fb35-4cce-863c-6b6ea88722db no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:44:17.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3275" for this suite.

• [SLOW TEST:8.348 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1453,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:44:17.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-3665
STEP: creating replication controller nodeport-test in namespace services-3665
I0130 21:44:18.032693       8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3665, replica count: 2
I0130 21:44:21.083296       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:44:24.083659       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:44:27.084005       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:44:30.084634       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 30 21:44:30.084: INFO: Creating new exec pod
Jan 30 21:44:39.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3665 execpodptjwh -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 30 21:44:39.519: INFO: stderr: "I0130 21:44:39.287801     750 log.go:172] (0xc00043e000) (0xc0002c94a0) Create stream\nI0130 21:44:39.287925     750 log.go:172] (0xc00043e000) (0xc0002c94a0) Stream added, broadcasting: 1\nI0130 21:44:39.290458     750 log.go:172] (0xc00043e000) Reply frame received for 1\nI0130 21:44:39.290485     750 log.go:172] (0xc00043e000) (0xc000628000) Create stream\nI0130 21:44:39.290494     750 log.go:172] (0xc00043e000) (0xc000628000) Stream added, broadcasting: 3\nI0130 21:44:39.292095     750 log.go:172] (0xc00043e000) Reply frame received for 3\nI0130 21:44:39.292121     750 log.go:172] (0xc00043e000) (0xc0006c9a40) Create stream\nI0130 21:44:39.292128     750 log.go:172] (0xc00043e000) (0xc0006c9a40) Stream added, broadcasting: 5\nI0130 21:44:39.293411     750 log.go:172] (0xc00043e000) Reply frame received for 5\nI0130 21:44:39.396418     750 log.go:172] (0xc00043e000) Data frame received for 5\nI0130 21:44:39.396575     750 log.go:172] (0xc0006c9a40) (5) Data frame handling\nI0130 21:44:39.396611     750 log.go:172] (0xc0006c9a40) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0130 21:44:39.403615     750 log.go:172] (0xc00043e000) Data frame received for 5\nI0130 21:44:39.403762     750 log.go:172] (0xc0006c9a40) (5) Data frame handling\nI0130 21:44:39.403847     750 log.go:172] (0xc0006c9a40) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0130 21:44:39.499597     750 log.go:172] (0xc00043e000) (0xc000628000) Stream removed, broadcasting: 3\nI0130 21:44:39.499814     750 log.go:172] (0xc00043e000) Data frame received for 1\nI0130 21:44:39.499823     750 log.go:172] (0xc0002c94a0) (1) Data frame handling\nI0130 21:44:39.499838     750 log.go:172] (0xc0002c94a0) (1) Data frame sent\nI0130 21:44:39.499850     750 log.go:172] (0xc00043e000) (0xc0002c94a0) Stream removed, broadcasting: 1\nI0130 21:44:39.500464     750 log.go:172] (0xc00043e000) (0xc0006c9a40) Stream removed, broadcasting: 5\nI0130 21:44:39.500494     750 log.go:172] (0xc00043e000) (0xc0002c94a0) Stream removed, broadcasting: 1\nI0130 21:44:39.500629     750 log.go:172] (0xc00043e000) (0xc000628000) Stream removed, broadcasting: 3\nI0130 21:44:39.500684     750 log.go:172] (0xc00043e000) (0xc0006c9a40) Stream removed, broadcasting: 5\nI0130 21:44:39.500766     750 log.go:172] (0xc00043e000) Go away received\n"
Jan 30 21:44:39.520: INFO: stdout: ""
Jan 30 21:44:39.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3665 execpodptjwh -- /bin/sh -x -c nc -zv -t -w 2 10.96.217.76 80'
Jan 30 21:44:39.933: INFO: stderr: "I0130 21:44:39.691006     767 log.go:172] (0xc0004a2000) (0xc00085c0a0) Create stream\nI0130 21:44:39.691224     767 log.go:172] (0xc0004a2000) (0xc00085c0a0) Stream added, broadcasting: 1\nI0130 21:44:39.695268     767 log.go:172] (0xc0004a2000) Reply frame received for 1\nI0130 21:44:39.695354     767 log.go:172] (0xc0004a2000) (0xc0007dfb80) Create stream\nI0130 21:44:39.695369     767 log.go:172] (0xc0004a2000) (0xc0007dfb80) Stream added, broadcasting: 3\nI0130 21:44:39.697210     767 log.go:172] (0xc0004a2000) Reply frame received for 3\nI0130 21:44:39.697236     767 log.go:172] (0xc0004a2000) (0xc0004cf5e0) Create stream\nI0130 21:44:39.697245     767 log.go:172] (0xc0004a2000) (0xc0004cf5e0) Stream added, broadcasting: 5\nI0130 21:44:39.698561     767 log.go:172] (0xc0004a2000) Reply frame received for 5\nI0130 21:44:39.775708     767 log.go:172] (0xc0004a2000) Data frame received for 5\nI0130 21:44:39.775778     767 log.go:172] (0xc0004cf5e0) (5) Data frame handling\nI0130 21:44:39.775794     767 log.go:172] (0xc0004cf5e0) (5) Data frame sent\nI0130 21:44:39.775805     767 log.go:172] (0xc0004a2000) Data frame received for 5\nI0130 21:44:39.775809     767 log.go:172] (0xc0004cf5e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.217.76 80\nI0130 21:44:39.775825     767 log.go:172] (0xc0004cf5e0) (5) Data frame sent\nI0130 21:44:39.780126     767 log.go:172] (0xc0004a2000) Data frame received for 5\nI0130 21:44:39.780141     767 log.go:172] (0xc0004cf5e0) (5) Data frame handling\nI0130 21:44:39.780150     767 log.go:172] (0xc0004cf5e0) (5) Data frame sent\nConnection to 10.96.217.76 80 port [tcp/http] succeeded!\nI0130 21:44:39.905454     767 log.go:172] (0xc0004a2000) Data frame received for 1\nI0130 21:44:39.905618     767 log.go:172] (0xc00085c0a0) (1) Data frame handling\nI0130 21:44:39.905717     767 log.go:172] (0xc00085c0a0) (1) Data frame sent\nI0130 21:44:39.906232     767 log.go:172] (0xc0004a2000) (0xc00085c0a0) Stream removed, broadcasting: 1\nI0130 21:44:39.906408     767 log.go:172] (0xc0004a2000) (0xc0004cf5e0) Stream removed, broadcasting: 5\nI0130 21:44:39.906887     767 log.go:172] (0xc0004a2000) (0xc0007dfb80) Stream removed, broadcasting: 3\nI0130 21:44:39.907214     767 log.go:172] (0xc0004a2000) Go away received\nI0130 21:44:39.909064     767 log.go:172] (0xc0004a2000) (0xc00085c0a0) Stream removed, broadcasting: 1\nI0130 21:44:39.909176     767 log.go:172] (0xc0004a2000) (0xc0007dfb80) Stream removed, broadcasting: 3\nI0130 21:44:39.909246     767 log.go:172] (0xc0004a2000) (0xc0004cf5e0) Stream removed, broadcasting: 5\n"
Jan 30 21:44:39.934: INFO: stdout: ""
Jan 30 21:44:39.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3665 execpodptjwh -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32247'
Jan 30 21:44:40.399: INFO: stderr: "I0130 21:44:40.198404     788 log.go:172] (0xc000ad6bb0) (0xc000a50280) Create stream\nI0130 21:44:40.198800     788 log.go:172] (0xc000ad6bb0) (0xc000a50280) Stream added, broadcasting: 1\nI0130 21:44:40.206283     788 log.go:172] (0xc000ad6bb0) Reply frame received for 1\nI0130 21:44:40.206418     788 log.go:172] (0xc000ad6bb0) (0xc000a50320) Create stream\nI0130 21:44:40.206442     788 log.go:172] (0xc000ad6bb0) (0xc000a50320) Stream added, broadcasting: 3\nI0130 21:44:40.209930     788 log.go:172] (0xc000ad6bb0) Reply frame received for 3\nI0130 21:44:40.209985     788 log.go:172] (0xc000ad6bb0) (0xc000b80a00) Create stream\nI0130 21:44:40.209996     788 log.go:172] (0xc000ad6bb0) (0xc000b80a00) Stream added, broadcasting: 5\nI0130 21:44:40.213372     788 log.go:172] (0xc000ad6bb0) Reply frame received for 5\nI0130 21:44:40.282243     788 log.go:172] (0xc000ad6bb0) Data frame received for 5\nI0130 21:44:40.282424     788 log.go:172] (0xc000b80a00) (5) Data frame handling\nI0130 21:44:40.282457     788 log.go:172] (0xc000b80a00) (5) Data frame sent\nI0130 21:44:40.282475     788 log.go:172] (0xc000ad6bb0) Data frame received for 5\nI0130 21:44:40.282484     788 log.go:172] (0xc000b80a00) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.2.250 32247\nConnection to 10.96.2.250 32247 port [tcp/32247] succeeded!\nI0130 21:44:40.282597     788 log.go:172] (0xc000b80a00) (5) Data frame sent\nI0130 21:44:40.384263     788 log.go:172] (0xc000ad6bb0) Data frame received for 1\nI0130 21:44:40.384378     788 log.go:172] (0xc000a50280) (1) Data frame handling\nI0130 21:44:40.384416     788 log.go:172] (0xc000a50280) (1) Data frame sent\nI0130 21:44:40.385013     788 log.go:172] (0xc000ad6bb0) (0xc000a50320) Stream removed, broadcasting: 3\nI0130 21:44:40.385109     788 log.go:172] (0xc000ad6bb0) (0xc000a50280) Stream removed, broadcasting: 1\nI0130 21:44:40.385182     788 log.go:172] (0xc000ad6bb0) (0xc000b80a00) Stream removed, broadcasting: 5\nI0130 21:44:40.385260     788 log.go:172] (0xc000ad6bb0) Go away received\nI0130 21:44:40.387306     788 log.go:172] (0xc000ad6bb0) (0xc000a50280) Stream removed, broadcasting: 1\nI0130 21:44:40.387330     788 log.go:172] (0xc000ad6bb0) (0xc000a50320) Stream removed, broadcasting: 3\nI0130 21:44:40.387340     788 log.go:172] (0xc000ad6bb0) (0xc000b80a00) Stream removed, broadcasting: 5\n"
Jan 30 21:44:40.399: INFO: stdout: ""
Jan 30 21:44:40.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3665 execpodptjwh -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32247'
Jan 30 21:44:40.757: INFO: stderr: "I0130 21:44:40.621074     807 log.go:172] (0xc000a520b0) (0xc0005e7f40) Create stream\nI0130 21:44:40.621343     807 log.go:172] (0xc000a520b0) (0xc0005e7f40) Stream added, broadcasting: 1\nI0130 21:44:40.624381     807 log.go:172] (0xc000a520b0) Reply frame received for 1\nI0130 21:44:40.624504     807 log.go:172] (0xc000a520b0) (0xc0005748c0) Create stream\nI0130 21:44:40.624518     807 log.go:172] (0xc000a520b0) (0xc0005748c0) Stream added, broadcasting: 3\nI0130 21:44:40.625966     807 log.go:172] (0xc000a520b0) Reply frame received for 3\nI0130 21:44:40.625987     807 log.go:172] (0xc000a520b0) (0xc0007048c0) Create stream\nI0130 21:44:40.625995     807 log.go:172] (0xc000a520b0) (0xc0007048c0) Stream added, broadcasting: 5\nI0130 21:44:40.627106     807 log.go:172] (0xc000a520b0) Reply frame received for 5\nI0130 21:44:40.685241     807 log.go:172] (0xc000a520b0) Data frame received for 5\nI0130 21:44:40.685313     807 log.go:172] (0xc0007048c0) (5) Data frame handling\nI0130 21:44:40.685333     807 log.go:172] (0xc0007048c0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32247\nI0130 21:44:40.688854     807 log.go:172] (0xc000a520b0) Data frame received for 5\nI0130 21:44:40.688885     807 log.go:172] (0xc0007048c0) (5) Data frame handling\nI0130 21:44:40.688898     807 log.go:172] (0xc0007048c0) (5) Data frame sent\nConnection to 10.96.1.234 32247 port [tcp/32247] succeeded!\nI0130 21:44:40.749160     807 log.go:172] (0xc000a520b0) (0xc0005748c0) Stream removed, broadcasting: 3\nI0130 21:44:40.749266     807 log.go:172] (0xc000a520b0) Data frame received for 1\nI0130 21:44:40.749305     807 log.go:172] (0xc0005e7f40) (1) Data frame handling\nI0130 21:44:40.749325     807 log.go:172] (0xc0005e7f40) (1) Data frame sent\nI0130 21:44:40.749433     807 log.go:172] (0xc000a520b0) (0xc0007048c0) Stream removed, broadcasting: 5\nI0130 21:44:40.749470     807 log.go:172] (0xc000a520b0) (0xc0005e7f40) Stream removed, broadcasting: 1\nI0130 21:44:40.749491     807 log.go:172] (0xc000a520b0) Go away received\nI0130 21:44:40.750341     807 log.go:172] (0xc000a520b0) (0xc0005e7f40) Stream removed, broadcasting: 1\nI0130 21:44:40.750354     807 log.go:172] (0xc000a520b0) (0xc0005748c0) Stream removed, broadcasting: 3\nI0130 21:44:40.750358     807 log.go:172] (0xc000a520b0) (0xc0007048c0) Stream removed, broadcasting: 5\n"
Jan 30 21:44:40.757: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:44:40.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3665" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:22.951 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":95,"skipped":1469,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:44:40.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 21:44:40.841: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:44:46.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4848" for this suite.

• [SLOW TEST:5.920 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":96,"skipped":1472,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:44:46.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:45:00.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2504" for this suite.

• [SLOW TEST:13.997 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":97,"skipped":1495,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:45:00.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 30 21:45:00.891: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:45:22.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2952" for this suite.

• [SLOW TEST:21.670 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1499,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:45:22.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0130 21:45:34.179383       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 21:45:34.179: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:45:34.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1641" for this suite.

• [SLOW TEST:11.819 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":99,"skipped":1519,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:45:34.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:46:07.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7998" for this suite.
STEP: Destroying namespace "nsdeletetest-6417" for this suite.
Jan 30 21:46:07.588: INFO: Namespace nsdeletetest-6417 was already deleted
STEP: Destroying namespace "nsdeletetest-7372" for this suite.

• [SLOW TEST:33.405 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":100,"skipped":1545,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:46:07.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:46:15.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6604" for this suite.

• [SLOW TEST:8.226 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1546,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:46:15.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 21:46:15.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-517'
Jan 30 21:46:16.550: INFO: stderr: ""
Jan 30 21:46:16.551: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jan 30 21:46:16.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-517'
Jan 30 21:46:17.119: INFO: stderr: ""
Jan 30 21:46:17.119: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 30 21:46:18.127: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 21:46:18.127: INFO: Found 0 / 1
Jan 30 21:46:19.137: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 21:46:19.138: INFO: Found 0 / 1
Jan 30 21:46:20.130: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 21:46:20.130: INFO: Found 0 / 1
Jan 30 21:46:21.215: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 21:46:21.215: INFO: Found 0 / 1
Jan 30 21:46:22.147: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 21:46:22.148: INFO: Found 0 / 1
Jan 30 21:46:23.126: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 21:46:23.126: INFO: Found 1 / 1
Jan 30 21:46:23.126: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 30 21:46:23.131: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 30 21:46:23.131: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 30 21:46:23.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-skrn9 --namespace=kubectl-517'
Jan 30 21:46:23.304: INFO: stderr: ""
Jan 30 21:46:23.304: INFO: stdout: "Name:         agnhost-master-skrn9\nNamespace:    kubectl-517\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Thu, 30 Jan 2020 21:46:16 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.2\nIPs:\n  IP:           10.44.0.2\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://d609d9316ac3885f7af130ef00467972049b6e85cdff4e7ff3475d5b967ed0c5\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 30 Jan 2020 21:46:22 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9mdb7 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-9mdb7:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-9mdb7\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned kubectl-517/agnhost-master-skrn9 to jerma-node\n  Normal  Pulled     4s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-node  Started container agnhost-master\n"
Jan 30 21:46:23.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-517'
Jan 30 21:46:23.487: INFO: stderr: ""
Jan 30 21:46:23.488: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-517\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: agnhost-master-skrn9\n"
Jan 30 21:46:23.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-517'
Jan 30 21:46:23.612: INFO: stderr: ""
Jan 30 21:46:23.612: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-517\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.195.244\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.2:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan 30 21:46:23.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Jan 30 21:46:23.772: INFO: stderr: ""
Jan 30 21:46:23.772: INFO: stdout: "Name:               jerma-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 30 Jan 2020 21:46:21 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Thu, 30 Jan 2020 21:43:18 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 30 Jan 2020 21:43:18 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 30 Jan 2020 21:43:18 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 30 Jan 2020 21:43:18 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (4 in total)\n  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26d\n  kube-system                 weave-net-kz8lv                                            20m (0%)      0 (0%)      0 (0%)           0 (0%)         26d\n  kubectl-517                 agnhost-master-skrn9                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s\n  kubelet-test-6604           busybox-readonly-fsdedfc722-484f-46a7-8f6c-195ba64a3ed0    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Jan 30 21:46:23.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-517'
Jan 30 21:46:24.008: INFO: stderr: ""
Jan 30 21:46:24.009: INFO: stdout: "Name:         kubectl-517\nLabels:       e2e-framework=kubectl\n              e2e-run=4e0e84fb-f3f9-4c79-8dce-3815ab320190\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:46:24.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-517" for this suite.

• [SLOW TEST:8.193 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":102,"skipped":1557,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:46:24.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9951
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 21:46:24.081: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 21:46:58.226: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.3:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9951 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 21:46:58.226: INFO: >>> kubeConfig: /root/.kube/config
I0130 21:46:58.323879       8 log.go:172] (0xc0024580b0) (0xc0028e2be0) Create stream
I0130 21:46:58.324364       8 log.go:172] (0xc0024580b0) (0xc0028e2be0) Stream added, broadcasting: 1
I0130 21:46:58.331400       8 log.go:172] (0xc0024580b0) Reply frame received for 1
I0130 21:46:58.331540       8 log.go:172] (0xc0024580b0) (0xc002861d60) Create stream
I0130 21:46:58.331568       8 log.go:172] (0xc0024580b0) (0xc002861d60) Stream added, broadcasting: 3
I0130 21:46:58.333772       8 log.go:172] (0xc0024580b0) Reply frame received for 3
I0130 21:46:58.333867       8 log.go:172] (0xc0024580b0) (0xc001ca6640) Create stream
I0130 21:46:58.333884       8 log.go:172] (0xc0024580b0) (0xc001ca6640) Stream added, broadcasting: 5
I0130 21:46:58.338244       8 log.go:172] (0xc0024580b0) Reply frame received for 5
I0130 21:46:58.437329       8 log.go:172] (0xc0024580b0) Data frame received for 3
I0130 21:46:58.437446       8 log.go:172] (0xc002861d60) (3) Data frame handling
I0130 21:46:58.437480       8 log.go:172] (0xc002861d60) (3) Data frame sent
I0130 21:46:58.555183       8 log.go:172] (0xc0024580b0) (0xc002861d60) Stream removed, broadcasting: 3
I0130 21:46:58.555509       8 log.go:172] (0xc0024580b0) Data frame received for 1
I0130 21:46:58.555544       8 log.go:172] (0xc0028e2be0) (1) Data frame handling
I0130 21:46:58.555568       8 log.go:172] (0xc0028e2be0) (1) Data frame sent
I0130 21:46:58.556264       8 log.go:172] (0xc0024580b0) (0xc0028e2be0) Stream removed, broadcasting: 1
I0130 21:46:58.556672       8 log.go:172] (0xc0024580b0) (0xc001ca6640) Stream removed, broadcasting: 5
I0130 21:46:58.556742       8 log.go:172] (0xc0024580b0) Go away received
I0130 21:46:58.557253       8 log.go:172] (0xc0024580b0) (0xc0028e2be0) Stream removed, broadcasting: 1
I0130 21:46:58.557294       8 log.go:172] (0xc0024580b0) (0xc002861d60) Stream removed, broadcasting: 3
I0130 21:46:58.557374       8 log.go:172] (0xc0024580b0) (0xc001ca6640) Stream removed, broadcasting: 5
Jan 30 21:46:58.557: INFO: Found all expected endpoints: [netserver-0]
Jan 30 21:46:58.564: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9951 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 21:46:58.564: INFO: >>> kubeConfig: /root/.kube/config
I0130 21:46:58.610920       8 log.go:172] (0xc001d4ac60) (0xc00240d040) Create stream
I0130 21:46:58.611236       8 log.go:172] (0xc001d4ac60) (0xc00240d040) Stream added, broadcasting: 1
I0130 21:46:58.624892       8 log.go:172] (0xc001d4ac60) Reply frame received for 1
I0130 21:46:58.625173       8 log.go:172] (0xc001d4ac60) (0xc0020e0c80) Create stream
I0130 21:46:58.625189       8 log.go:172] (0xc001d4ac60) (0xc0020e0c80) Stream added, broadcasting: 3
I0130 21:46:58.629169       8 log.go:172] (0xc001d4ac60) Reply frame received for 3
I0130 21:46:58.629255       8 log.go:172] (0xc001d4ac60) (0xc00240d180) Create stream
I0130 21:46:58.629291       8 log.go:172] (0xc001d4ac60) (0xc00240d180) Stream added, broadcasting: 5
I0130 21:46:58.631455       8 log.go:172] (0xc001d4ac60) Reply frame received for 5
I0130 21:46:58.741630       8 log.go:172] (0xc001d4ac60) Data frame received for 3
I0130 21:46:58.742146       8 log.go:172] (0xc0020e0c80) (3) Data frame handling
I0130 21:46:58.742224       8 log.go:172] (0xc0020e0c80) (3) Data frame sent
I0130 21:46:58.827690       8 log.go:172] (0xc001d4ac60) (0xc0020e0c80) Stream removed, broadcasting: 3
I0130 21:46:58.827785       8 log.go:172] (0xc001d4ac60) Data frame received for 1
I0130 21:46:58.827799       8 log.go:172] (0xc00240d040) (1) Data frame handling
I0130 21:46:58.827812       8 log.go:172] (0xc00240d040) (1) Data frame sent
I0130 21:46:58.827824       8 log.go:172] (0xc001d4ac60) (0xc00240d040) Stream removed, broadcasting: 1
I0130 21:46:58.829457       8 log.go:172] (0xc001d4ac60) (0xc00240d180) Stream removed, broadcasting: 5
I0130 21:46:58.829492       8 log.go:172] (0xc001d4ac60) Go away received
I0130 21:46:58.829860       8 log.go:172] (0xc001d4ac60) (0xc00240d040) Stream removed, broadcasting: 1
I0130 21:46:58.829887       8 log.go:172] (0xc001d4ac60) (0xc0020e0c80) Stream removed, broadcasting: 3
I0130 21:46:58.829897       8 log.go:172] (0xc001d4ac60) (0xc00240d180) Stream removed, broadcasting: 5
Jan 30 21:46:58.829: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:46:58.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9951" for this suite.

• [SLOW TEST:34.820 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1565,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:46:58.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4461
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jan 30 21:46:58.948: INFO: Found 0 stateful pods, waiting for 3
Jan 30 21:47:09.017: INFO: Found 2 stateful pods, waiting for 3
Jan 30 21:47:18.958: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:47:18.958: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:47:18.958: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 30 21:47:28.960: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:47:28.960: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:47:28.960: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 30 21:47:29.001: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 30 21:47:39.101: INFO: Updating stateful set ss2
Jan 30 21:47:39.199: INFO: Waiting for Pod statefulset-4461/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 30 21:47:49.219: INFO: Waiting for Pod statefulset-4461/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 30 21:47:59.437: INFO: Found 2 stateful pods, waiting for 3
Jan 30 21:48:09.447: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:48:09.447: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:48:09.447: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 30 21:48:19.446: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:48:19.446: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:48:19.446: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 30 21:48:19.486: INFO: Updating stateful set ss2
Jan 30 21:48:19.501: INFO: Waiting for Pod statefulset-4461/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 30 21:48:29.514: INFO: Waiting for Pod statefulset-4461/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 30 21:48:39.572: INFO: Updating stateful set ss2
Jan 30 21:48:39.595: INFO: Waiting for StatefulSet statefulset-4461/ss2 to complete update
Jan 30 21:48:39.595: INFO: Waiting for Pod statefulset-4461/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 30 21:48:49.605: INFO: Waiting for StatefulSet statefulset-4461/ss2 to complete update
Jan 30 21:48:49.605: INFO: Waiting for Pod statefulset-4461/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 30 21:48:59.615: INFO: Deleting all statefulset in ns statefulset-4461
Jan 30 21:48:59.621: INFO: Scaling statefulset ss2 to 0
Jan 30 21:49:39.699: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 21:49:39.702: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:49:39.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4461" for this suite.

• [SLOW TEST:160.912 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":104,"skipped":1573,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:49:39.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 30 21:49:39.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3453'
Jan 30 21:49:40.030: INFO: stderr: ""
Jan 30 21:49:40.030: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Jan 30 21:49:40.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3453'
Jan 30 21:49:52.336: INFO: stderr: ""
Jan 30 21:49:52.336: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:49:52.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3453" for this suite.

• [SLOW TEST:12.600 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":105,"skipped":1601,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:49:52.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3295.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3295.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3295.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3295.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3295.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3295.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 21:50:02.610: INFO: DNS probes using dns-3295/dns-test-62c446e4-ff0d-409b-978b-491390b38235 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:50:02.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3295" for this suite.

• [SLOW TEST:10.407 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":106,"skipped":1610,"failed":0}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:50:02.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 30 21:50:02.977: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:50:04.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3873" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":107,"skipped":1616,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:50:04.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 21:50:04.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775" in namespace "downward-api-3351" to be "success or failure"
Jan 30 21:50:04.488: INFO: Pod "downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775": Phase="Pending", Reason="", readiness=false. Elapsed: 142.29929ms
Jan 30 21:50:06.502: INFO: Pod "downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156399809s
Jan 30 21:50:08.509: INFO: Pod "downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163356346s
Jan 30 21:50:10.514: INFO: Pod "downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168936841s
Jan 30 21:50:12.570: INFO: Pod "downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775": Phase="Pending", Reason="", readiness=false. Elapsed: 8.224099482s
Jan 30 21:50:14.582: INFO: Pod "downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775": Phase="Pending", Reason="", readiness=false. Elapsed: 10.236622276s
Jan 30 21:50:16.599: INFO: Pod "downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.253310218s
STEP: Saw pod success
Jan 30 21:50:16.599: INFO: Pod "downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775" satisfied condition "success or failure"
Jan 30 21:50:16.604: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775 container client-container: <nil>
STEP: delete the pod
Jan 30 21:50:16.856: INFO: Waiting for pod downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775 to disappear
Jan 30 21:50:16.864: INFO: Pod downwardapi-volume-1cc8f65e-45fc-456f-9834-2b6899c24775 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:50:16.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3351" for this suite.

• [SLOW TEST:12.735 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1622,"failed":0}
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:50:16.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Jan 30 21:50:17.032: INFO: Waiting up to 5m0s for pod "client-containers-5d087ef4-d1d0-4536-ab0f-333b3169fad1" in namespace "containers-144" to be "success or failure"
Jan 30 21:50:17.117: INFO: Pod "client-containers-5d087ef4-d1d0-4536-ab0f-333b3169fad1": Phase="Pending", Reason="", readiness=false. Elapsed: 85.296829ms
Jan 30 21:50:19.124: INFO: Pod "client-containers-5d087ef4-d1d0-4536-ab0f-333b3169fad1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091850762s
Jan 30 21:50:21.129: INFO: Pod "client-containers-5d087ef4-d1d0-4536-ab0f-333b3169fad1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097248239s
Jan 30 21:50:23.136: INFO: Pod "client-containers-5d087ef4-d1d0-4536-ab0f-333b3169fad1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10379837s
Jan 30 21:50:25.142: INFO: Pod "client-containers-5d087ef4-d1d0-4536-ab0f-333b3169fad1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110025433s
STEP: Saw pod success
Jan 30 21:50:25.142: INFO: Pod "client-containers-5d087ef4-d1d0-4536-ab0f-333b3169fad1" satisfied condition "success or failure"
Jan 30 21:50:25.147: INFO: Trying to get logs from node jerma-node pod client-containers-5d087ef4-d1d0-4536-ab0f-333b3169fad1 container test-container: <nil>
STEP: delete the pod
Jan 30 21:50:25.263: INFO: Waiting for pod client-containers-5d087ef4-d1d0-4536-ab0f-333b3169fad1 to disappear
Jan 30 21:50:25.273: INFO: Pod client-containers-5d087ef4-d1d0-4536-ab0f-333b3169fad1 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:50:25.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-144" for this suite.

• [SLOW TEST:8.411 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1622,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:50:25.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-6885
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-6885
STEP: creating replication controller externalsvc in namespace services-6885
I0130 21:50:25.699725       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6885, replica count: 2
I0130 21:50:28.750998       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:50:31.751723       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:50:34.752184       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jan 30 21:50:34.846: INFO: Creating new exec pod
Jan 30 21:50:42.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6885 execpodqfvbm -- /bin/sh -x -c nslookup nodeport-service'
Jan 30 21:50:45.337: INFO: stderr: "I0130 21:50:45.127743    1014 log.go:172] (0xc0000f4fd0) (0xc00062bea0) Create stream\nI0130 21:50:45.127977    1014 log.go:172] (0xc0000f4fd0) (0xc00062bea0) Stream added, broadcasting: 1\nI0130 21:50:45.135630    1014 log.go:172] (0xc0000f4fd0) Reply frame received for 1\nI0130 21:50:45.135749    1014 log.go:172] (0xc0000f4fd0) (0xc000476780) Create stream\nI0130 21:50:45.135769    1014 log.go:172] (0xc0000f4fd0) (0xc000476780) Stream added, broadcasting: 3\nI0130 21:50:45.137229    1014 log.go:172] (0xc0000f4fd0) Reply frame received for 3\nI0130 21:50:45.137262    1014 log.go:172] (0xc0000f4fd0) (0xc000547b80) Create stream\nI0130 21:50:45.137274    1014 log.go:172] (0xc0000f4fd0) (0xc000547b80) Stream added, broadcasting: 5\nI0130 21:50:45.138872    1014 log.go:172] (0xc0000f4fd0) Reply frame received for 5\nI0130 21:50:45.228230    1014 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0130 21:50:45.228311    1014 log.go:172] (0xc000547b80) (5) Data frame handling\nI0130 21:50:45.228337    1014 log.go:172] (0xc000547b80) (5) Data frame sent\n+ nslookup nodeport-service\nI0130 21:50:45.241606    1014 log.go:172] (0xc0000f4fd0) Data frame received for 3\nI0130 21:50:45.241646    1014 log.go:172] (0xc000476780) (3) Data frame handling\nI0130 21:50:45.241671    1014 log.go:172] (0xc000476780) (3) Data frame sent\nI0130 21:50:45.241691    1014 log.go:172] (0xc0000f4fd0) Data frame received for 3\nI0130 21:50:45.241702    1014 log.go:172] (0xc000476780) (3) Data frame handling\nI0130 21:50:45.241744    1014 log.go:172] (0xc000476780) (3) Data frame sent\nI0130 21:50:45.321595    1014 log.go:172] (0xc0000f4fd0) Data frame received for 1\nI0130 21:50:45.321790    1014 log.go:172] (0xc0000f4fd0) (0xc000547b80) Stream removed, broadcasting: 5\nI0130 21:50:45.321852    1014 log.go:172] (0xc00062bea0) (1) Data frame handling\nI0130 21:50:45.321878    1014 log.go:172] (0xc00062bea0) (1) Data frame sent\nI0130 21:50:45.321908    1014 log.go:172] (0xc0000f4fd0) (0xc000476780) Stream removed, broadcasting: 3\nI0130 21:50:45.321949    1014 log.go:172] (0xc0000f4fd0) (0xc00062bea0) Stream removed, broadcasting: 1\nI0130 21:50:45.321980    1014 log.go:172] (0xc0000f4fd0) Go away received\nI0130 21:50:45.323015    1014 log.go:172] (0xc0000f4fd0) (0xc00062bea0) Stream removed, broadcasting: 1\nI0130 21:50:45.323048    1014 log.go:172] (0xc0000f4fd0) (0xc000476780) Stream removed, broadcasting: 3\nI0130 21:50:45.323060    1014 log.go:172] (0xc0000f4fd0) (0xc000547b80) Stream removed, broadcasting: 5\n"
Jan 30 21:50:45.337: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6885.svc.cluster.local\tcanonical name = externalsvc.services-6885.svc.cluster.local.\nName:\texternalsvc.services-6885.svc.cluster.local\nAddress: 10.96.175.114\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-6885, will wait for the garbage collector to delete the pods
Jan 30 21:50:45.416: INFO: Deleting ReplicationController externalsvc took: 22.440044ms
Jan 30 21:50:45.716: INFO: Terminating ReplicationController externalsvc pods took: 300.397935ms
Jan 30 21:51:03.196: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:51:03.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6885" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:37.947 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":110,"skipped":1635,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:51:03.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 21:51:04.216: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 21:51:06.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 21:51:08.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 21:51:10.264: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716017864, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 21:51:13.267: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:51:13.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7874" for this suite.
STEP: Destroying namespace "webhook-7874-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.269 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":111,"skipped":1645,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:51:13.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 30 21:51:25.626: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:51:25.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-204" for this suite.

• [SLOW TEST:12.225 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1657,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:51:25.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4471, will wait for the garbage collector to delete the pods
Jan 30 21:51:35.953: INFO: Deleting Job.batch foo took: 8.00409ms
Jan 30 21:51:36.354: INFO: Terminating Job.batch foo pods took: 400.963031ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:52:22.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4471" for this suite.

• [SLOW TEST:56.672 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":113,"skipped":1664,"failed":0}
SSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:52:22.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 30 21:52:30.588: INFO: &Pod{ObjectMeta:{send-events-12bd4a44-496b-49d2-b5e2-e3dcf6d57301  events-2317 /api/v1/namespaces/events-2317/pods/send-events-12bd4a44-496b-49d2-b5e2-e3dcf6d57301 02c635ac-f342-49b8-bc66-60c3aa5b5da4 5375966 0 2020-01-30 21:52:22 +0000 UTC   map[name:foo time:493116759] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bfn49,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bfn49,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bfn49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:22 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-30 21:52:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 21:52:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://2e674efbe4a7f390027f82f3838c41e49e672e7ccc3045c8449e2e6921a71a21,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 30 21:52:32.600: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 30 21:52:34.609: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:52:34.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2317" for this suite.

• [SLOW TEST:12.229 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":114,"skipped":1671,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:52:34.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 21:52:34.688: INFO: Creating deployment "webserver-deployment"
Jan 30 21:52:34.693: INFO: Waiting for observed generation 1
Jan 30 21:52:36.816: INFO: Waiting for all required pods to come up
Jan 30 21:52:36.839: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 30 21:52:56.936: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 30 21:52:57.030: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 30 21:52:57.037: INFO: Updating deployment webserver-deployment
Jan 30 21:52:57.037: INFO: Waiting for observed generation 2
Jan 30 21:52:59.661: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 30 21:52:59.718: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 30 21:52:59.980: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 30 21:53:00.064: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 30 21:53:00.064: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 30 21:53:00.070: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 30 21:53:00.262: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 30 21:53:00.263: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 30 21:53:00.274: INFO: Updating deployment webserver-deployment
Jan 30 21:53:00.274: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 30 21:53:01.932: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 30 21:53:02.218: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
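The two .spec.replicas values verified above (20 and 13) come from the Deployment controller's proportional scaling: with maxSurge=3, scaling the Deployment from 10 to 30 raises the allowed pod total from 13 to 33, and the 20 extra replicas are split across the two ReplicaSets in proportion to their current sizes (8 and 5); the ReplicaSet dumps below record this bound in the deployment.kubernetes.io/max-replicas annotation. What follows is a minimal, illustrative Go sketch of that arithmetic under those assumptions, not the controller's actual implementation (which lives in k8s.io/kubernetes/pkg/controller/deployment); the type and function names are hypothetical.

package main

import (
	"fmt"
	"math"
)

// replicaSet is a hypothetical stand-in for the two ReplicaSets dumped below.
type replicaSet struct {
	name     string
	replicas int
}

// scaleProportionally distributes (desired+maxSurge-currentTotal) extra
// replicas across ReplicaSets in proportion to their current sizes,
// handing any rounding remainder to the last one.
func scaleProportionally(rss []replicaSet, desired, maxSurge int) []replicaSet {
	total := 0
	for _, rs := range rss {
		total += rs.replicas
	}
	allowed := desired + maxSurge // 30 + 3 = 33, the max-replicas annotation below
	delta := allowed - total      // 33 - (8 + 5) = 20 new pods to hand out
	out := make([]replicaSet, len(rss))
	given := 0
	for i, rs := range rss {
		share := int(math.Round(float64(delta) * float64(rs.replicas) / float64(total)))
		if i == len(rss)-1 {
			share = delta - given // absorb the rounding remainder
		}
		given += share
		out[i] = replicaSet{rs.name, rs.replicas + share}
	}
	return out
}

func main() {
	// .spec.replicas values at the moment of the scale-up, taken from the log above.
	fmt.Println(scaleProportionally([]replicaSet{
		{"595b5b9587", 8}, // first rollout (httpd:2.4.38-alpine)
		{"c7997dcc8", 5},  // second rollout (non-existent image webserver:404)
	}, 30, 3))
	// Output: [{595b5b9587 20} {c7997dcc8 13}] — matching the verified .spec.replicas
}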
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 30 21:53:08.720: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-4276 /apis/apps/v1/namespaces/deployment-4276/deployments/webserver-deployment b7dc6c11-fa45-4287-a1d1-b13f58406a20 5376284 3 2020-01-30 21:52:34 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a74b18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-30 21:53:01 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-30 21:53:05 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
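The status block above is internally consistent: 33 pods exist (20 + 13), only the 8 old-image pods are available, and with MaxUnavailable=2 the Deployment needs at least 30 - 2 = 28 available replicas, hence the Available condition is False with reason MinimumReplicasUnavailable. A tiny illustrative check of that arithmetic (not the controller's code; figures copied from the dump):

package main

import "fmt"

func main() {
	desired := 30       // Spec.Replicas
	maxUnavailable := 2 // RollingUpdate.MaxUnavailable
	available := 8      // Status.AvailableReplicas
	minAvailable := desired - maxUnavailable // 28
	if available < minAvailable {
		fmt.Println("Available=False, Reason=MinimumReplicasUnavailable")
	}
}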

Jan 30 21:53:11.115: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-4276 /apis/apps/v1/namespaces/deployment-4276/replicasets/webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 5376283 3 2020-01-30 21:52:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b7dc6c11-fa45-4287-a1d1-b13f58406a20 0xc000a75797 0xc000a75798}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a75808  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 30 21:53:11.115: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 30 21:53:11.115: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-4276 /apis/apps/v1/namespaces/deployment-4276/replicasets/webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 5376259 3 2020-01-30 21:52:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b7dc6c11-fa45-4287-a1d1-b13f58406a20 0xc000a756d7 0xc000a756d8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000a75738  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
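Both ReplicaSet dumps above carry deployment.kubernetes.io/desired-replicas:30 and deployment.kubernetes.io/max-replicas:33; the controller persists the scaling bound on each ReplicaSet so the proportional split can be recomputed from the objects themselves. A hypothetical sketch of reading those annotations (keys copied verbatim from the dumps):

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Annotations exactly as printed in both ReplicaSet dumps above.
	ann := map[string]string{
		"deployment.kubernetes.io/desired-replicas": "30",
		"deployment.kubernetes.io/max-replicas":     "33",
	}
	desired, _ := strconv.Atoi(ann["deployment.kubernetes.io/desired-replicas"])
	maxReplicas, _ := strconv.Atoi(ann["deployment.kubernetes.io/max-replicas"])
	fmt.Printf("surge headroom: %d\n", maxReplicas-desired) // 3, i.e. maxSurge
}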
Jan 30 21:53:11.193: INFO: Pod "webserver-deployment-595b5b9587-2dtgv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2dtgv webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-2dtgv bed1a451-ff07-4916-b91c-1e016df2c8a9 5376269 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc000a75d17 0xc000a75d18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-30 21:53:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
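The "is available" / "is not available" labels on the pod dumps reduce, given MinReadySeconds:0 in the ReplicaSet specs above, to whether the pod's Ready condition is True. A minimal illustrative predicate under that assumption (hypothetical types, not the e2e framework's helper):

package main

import "fmt"

// condition mirrors the Type/Status pairs printed in the PodStatus dumps.
type condition struct {
	Type, Status string
}

// isAvailable holds for MinReadySeconds=0, as in the ReplicaSets above:
// a pod counts as available exactly when its Ready condition is True.
func isAvailable(conds []condition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	// Conditions abridged from the two pod dumps above and below.
	notAvail := []condition{{"Initialized", "True"}, {"Ready", "False"}}
	avail := []condition{{"Initialized", "True"}, {"Ready", "True"}}
	fmt.Println(isAvailable(notAvail), isAvailable(avail)) // false true
}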
Jan 30 21:53:11.193: INFO: Pod "webserver-deployment-595b5b9587-44ppx" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-44ppx webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-44ppx 6bc2c425-a49a-4c99-b547-260a91f25a52 5376142 0 2020-01-30 21:52:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc000a75e87 0xc000a75e88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:56 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-30 21:52:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 21:52:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://409e4ae81291d68f31b515f5d3eb1e3e89dec5b0962cec5bef5bc122c662303d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.193: INFO: Pod "webserver-deployment-595b5b9587-5sv56" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5sv56 webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-5sv56 14c110eb-fb08-45ba-9845-38a39dafbddc 5376254 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc000a75ff0 0xc000a75ff1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.194: INFO: Pod "webserver-deployment-595b5b9587-6cb5v" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6cb5v webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-6cb5v 28e75d17-de5c-42e0-ade3-b3ae59ff827b 5376298 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004476107 0xc004476108}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-30 21:53:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.194: INFO: Pod "webserver-deployment-595b5b9587-6grvd" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6grvd webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-6grvd 82ba5ce1-d3d9-4cdc-9fc1-d6dce479396d 5376136 0 2020-01-30 21:52:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004476277 0xc004476278}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:56 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-30 21:52:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 21:52:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://b8c32ba824d9c3f40ebe37671a81c208cc30919f02523b49bd3f22f65c18b099,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.194: INFO: Pod "webserver-deployment-595b5b9587-bhgv8" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bhgv8 webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-bhgv8 209af59c-5969-47b1-a457-cb6e226fcf47 5376109 0 2020-01-30 21:52:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc0044763f0 0xc0044763f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-30 21:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-30 21:52:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 21:52:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://cd896744418c16995510a03c57ec9c188f017923217873192afa66d40daa4e60,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.194: INFO: Pod "webserver-deployment-595b5b9587-cbnhz" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cbnhz webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-cbnhz 5fffc969-1bf8-4a81-948a-d84112aa84d4 5376117 0 2020-01-30 21:52:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004476560 0xc004476561}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-30 21:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-30 21:52:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 21:52:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://d23067914ed43c05f71bf1ae9951ea6f428f0ba798d4d03d630f25e9142af5dd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.195: INFO: Pod "webserver-deployment-595b5b9587-cmgzm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cmgzm webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-cmgzm d0685b77-d9f4-4d89-9687-e51530ca2f9e 5376285 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc0044766d0 0xc0044766d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-30 21:53:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.195: INFO: Pod "webserver-deployment-595b5b9587-cw74f" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cw74f webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-cw74f 7bdbf85e-d81c-4b1c-9248-f1139099e481 5376257 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004476817 0xc004476818}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.195: INFO: Pod "webserver-deployment-595b5b9587-dbv7f" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dbv7f webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-dbv7f 9d5c054f-aec7-4c0a-9d4d-7068f330a644 5376119 0 2020-01-30 21:52:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004476927 0xc004476928}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-30 21:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-30 21:52:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 21:52:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4ebad6b6e21a77a79f31655afbbcb353f697e5237fa198847df95de00ed8ab7c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.195: INFO: Pod "webserver-deployment-595b5b9587-lzjhk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lzjhk webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-lzjhk c006a58f-d100-479a-86b6-708305fa7be8 5376256 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004476aa0 0xc004476aa1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.195: INFO: Pod "webserver-deployment-595b5b9587-mr7z2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mr7z2 webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-mr7z2 b8c00edc-188f-49ba-9904-e35a49426d79 5376265 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004476ba7 0xc004476ba8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-30 21:53:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.196: INFO: Pod "webserver-deployment-595b5b9587-n9rzw" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-n9rzw webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-n9rzw 15e5f1da-d8ea-4b84-827b-787d5c5df0c1 5376123 0 2020-01-30 21:52:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004476d07 0xc004476d08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-30 21:52:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 21:52:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://5177ec600f534f98c41763e42a8656ee06590f1047365cbed25949066ed5bc08,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.196: INFO: Pod "webserver-deployment-595b5b9587-qgp9j" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qgp9j webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-qgp9j e1b37a85-6ca6-4033-8b55-ceda0746f480 5376229 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004476e80 0xc004476e81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.196: INFO: Pod "webserver-deployment-595b5b9587-qp75z" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qp75z webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-qp75z e9754257-a984-40ce-9e21-d841c6d261f2 5376251 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004476f87 0xc004476f88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.196: INFO: Pod "webserver-deployment-595b5b9587-qsggr" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qsggr webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-qsggr 976dd5de-26e8-4e4e-8a94-1d5e698197db 5376130 0 2020-01-30 21:52:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004477097 0xc004477098}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-30 21:52:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 21:52:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6062472d2a7e3f35753ea15271b2bcea416cfa3977401c7d2134210e3473bf2a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.197: INFO: Pod "webserver-deployment-595b5b9587-sqd79" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sqd79 webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-sqd79 4906142c-6b8e-4d99-9d6e-e107ae5c0d35 5376294 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004477370 0xc004477371}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-30 21:53:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.197: INFO: Pod "webserver-deployment-595b5b9587-wq6wj" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wq6wj webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-wq6wj aebab9fe-a547-46a0-b719-b76e551fc8ae 5376106 0 2020-01-30 21:52:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc0044774c7 0xc0044774c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-01-30 21:52:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 21:52:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a61acbf651a51ac089a8f9968e2e36732fda2ba0ea33447b399858eec108e075,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.198: INFO: Pod "webserver-deployment-595b5b9587-xgj5s" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xgj5s webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-xgj5s dee74b9b-3355-4731-9a04-70dcd506f88a 5376237 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004477640 0xc004477641}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.198: INFO: Pod "webserver-deployment-595b5b9587-z8tk7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-z8tk7 webserver-deployment-595b5b9587- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-595b5b9587-z8tk7 129abbda-bb40-4719-95e5-c9258d7992d1 5376248 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 75d3b836-6891-4737-83ce-df5e27e309da 0xc004477757 0xc004477758}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.198: INFO: Pod "webserver-deployment-c7997dcc8-2z69q" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2z69q webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-2z69q c7d78c13-c25a-4c18-8340-66298fdbb226 5376258 0 2020-01-30 21:53:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc004477877 0xc004477878}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.198: INFO: Pod "webserver-deployment-c7997dcc8-5dc5t" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5dc5t webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-5dc5t 705cdab9-3b78-4c2d-a75a-fa4a42bf04fd 5376166 0 2020-01-30 21:52:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc004477997 0xc004477998}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-30 21:52:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.198: INFO: Pod "webserver-deployment-c7997dcc8-5wfp8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5wfp8 webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-5wfp8 16fe978f-39e5-4928-8742-3244515e4dfc 5376247 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc004477b07 0xc004477b08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.199: INFO: Pod "webserver-deployment-c7997dcc8-dzbd5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dzbd5 webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-dzbd5 02af3dfe-2303-408c-9af7-9e2c5eb52fb3 5376198 0 2020-01-30 21:52:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc004477c37 0xc004477c38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-30 21:52:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.199: INFO: Pod "webserver-deployment-c7997dcc8-gxd9r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gxd9r webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-gxd9r e3897c79-961d-483a-b428-35260946bb30 5376272 0 2020-01-30 21:53:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc004477da7 0xc004477da8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.199: INFO: Pod "webserver-deployment-c7997dcc8-jtv77" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jtv77 webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-jtv77 ebb9675b-d150-49e0-a26c-3d46a3447c62 5376228 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc004477ec7 0xc004477ec8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.199: INFO: Pod "webserver-deployment-c7997dcc8-n8hqb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n8hqb webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-n8hqb cb3032a0-3fa2-4dd2-b89b-80855685c0fd 5376167 0 2020-01-30 21:52:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc004477fe7 0xc004477fe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-30 21:52:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.199: INFO: Pod "webserver-deployment-c7997dcc8-q67l5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q67l5 webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-q67l5 6352478c-7dc5-4fa5-b9aa-6359e31b82ce 5376253 0 2020-01-30 21:53:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc0030861c7 0xc0030861c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.200: INFO: Pod "webserver-deployment-c7997dcc8-qmshk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qmshk webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-qmshk 89f23855-3b67-4c3c-8d40-86fce0190ae7 5376180 0 2020-01-30 21:52:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc003086327 0xc003086328}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-30 21:52:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.200: INFO: Pod "webserver-deployment-c7997dcc8-r868p" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r868p webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-r868p b66cf4ab-4cb6-4a8e-84dc-50ed30420cc0 5376266 0 2020-01-30 21:53:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc003086507 0xc003086508}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.200: INFO: Pod "webserver-deployment-c7997dcc8-s4pqd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s4pqd webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-s4pqd 85a1a670-6b83-43fa-80c1-d3b0cb69722a 5376275 0 2020-01-30 21:53:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc003086747 0xc003086748}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.200: INFO: Pod "webserver-deployment-c7997dcc8-tljkc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tljkc webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-tljkc 926978e1-2e16-4ab2-b42b-4b8c8d5fea82 5376199 0 2020-01-30 21:52:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc003086877 0xc003086878}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:52:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-30 21:52:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 21:53:11.200: INFO: Pod "webserver-deployment-c7997dcc8-x9z9r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x9z9r webserver-deployment-c7997dcc8- deployment-4276 /api/v1/namespaces/deployment-4276/pods/webserver-deployment-c7997dcc8-x9z9r f8a48060-4f9e-46d4-a4f9-f3d1df6294de 5376270 0 2020-01-30 21:53:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 66f20c08-3be0-42a4-b10d-8144b4cb5bbf 0xc0030869f7 0xc0030869f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bxlzz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bxlzz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bxlzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 21:53:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:53:11.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4276" for this suite.

• [SLOW TEST:39.746 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":115,"skipped":1692,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:53:14.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:53:43.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3520" for this suite.

• [SLOW TEST:32.982 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":116,"skipped":1697,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:53:47.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:54:15.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7462" for this suite.

• [SLOW TEST:28.656 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":117,"skipped":1705,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:54:16.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 21:54:17.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02" in namespace "projected-9504" to be "success or failure"
Jan 30 21:54:17.904: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02": Phase="Pending", Reason="", readiness=false. Elapsed: 416.989236ms
Jan 30 21:54:20.100: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.612958899s
Jan 30 21:54:22.116: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.629612483s
Jan 30 21:54:24.123: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636098247s
Jan 30 21:54:26.130: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02": Phase="Pending", Reason="", readiness=false. Elapsed: 8.642896368s
Jan 30 21:54:28.138: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02": Phase="Pending", Reason="", readiness=false. Elapsed: 10.651168259s
Jan 30 21:54:30.147: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02": Phase="Pending", Reason="", readiness=false. Elapsed: 12.66032576s
Jan 30 21:54:32.160: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02": Phase="Pending", Reason="", readiness=false. Elapsed: 14.672905353s
Jan 30 21:54:34.165: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02": Phase="Running", Reason="", readiness=true. Elapsed: 16.677808657s
Jan 30 21:54:36.172: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.68512966s
STEP: Saw pod success
Jan 30 21:54:36.172: INFO: Pod "downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02" satisfied condition "success or failure"
Jan 30 21:54:36.176: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02 container client-container: 
STEP: delete the pod
Jan 30 21:54:36.702: INFO: Waiting for pod downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02 to disappear
Jan 30 21:54:36.708: INFO: Pod downwardapi-volume-4b78e626-f383-4be5-8f93-759c0ab49b02 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:54:36.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9504" for this suite.

• [SLOW TEST:20.694 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1706,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:54:36.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-c114bd5f-2658-4180-8553-2dd4df4ec273
STEP: Creating a pod to test consume configMaps
Jan 30 21:54:36.862: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cd7b807b-4942-484a-91c6-4a57666a2a66" in namespace "projected-8264" to be "success or failure"
Jan 30 21:54:36.869: INFO: Pod "pod-projected-configmaps-cd7b807b-4942-484a-91c6-4a57666a2a66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576264ms
Jan 30 21:54:38.877: INFO: Pod "pod-projected-configmaps-cd7b807b-4942-484a-91c6-4a57666a2a66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014068857s
Jan 30 21:54:40.883: INFO: Pod "pod-projected-configmaps-cd7b807b-4942-484a-91c6-4a57666a2a66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020722458s
Jan 30 21:54:42.912: INFO: Pod "pod-projected-configmaps-cd7b807b-4942-484a-91c6-4a57666a2a66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049195117s
STEP: Saw pod success
Jan 30 21:54:42.912: INFO: Pod "pod-projected-configmaps-cd7b807b-4942-484a-91c6-4a57666a2a66" satisfied condition "success or failure"
Jan 30 21:54:42.917: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-cd7b807b-4942-484a-91c6-4a57666a2a66 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 21:54:43.011: INFO: Waiting for pod pod-projected-configmaps-cd7b807b-4942-484a-91c6-4a57666a2a66 to disappear
Jan 30 21:54:43.024: INFO: Pod pod-projected-configmaps-cd7b807b-4942-484a-91c6-4a57666a2a66 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:54:43.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8264" for this suite.

• [SLOW TEST:6.310 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1737,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:54:43.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jan 30 21:54:43.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1250'
Jan 30 21:54:43.557: INFO: stderr: ""
Jan 30 21:54:43.557: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 21:54:43.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1250'
Jan 30 21:54:43.695: INFO: stderr: ""
Jan 30 21:54:43.695: INFO: stdout: "update-demo-nautilus-458tl update-demo-nautilus-p27sq "
Jan 30 21:54:43.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458tl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1250'
Jan 30 21:54:43.980: INFO: stderr: ""
Jan 30 21:54:43.981: INFO: stdout: ""
Jan 30 21:54:43.981: INFO: update-demo-nautilus-458tl is created but not running
Jan 30 21:54:48.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1250'
Jan 30 21:54:49.160: INFO: stderr: ""
Jan 30 21:54:49.160: INFO: stdout: "update-demo-nautilus-458tl update-demo-nautilus-p27sq "
Jan 30 21:54:49.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458tl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1250'
Jan 30 21:54:49.326: INFO: stderr: ""
Jan 30 21:54:49.327: INFO: stdout: ""
Jan 30 21:54:49.327: INFO: update-demo-nautilus-458tl is created but not running
Jan 30 21:54:54.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1250'
Jan 30 21:54:54.521: INFO: stderr: ""
Jan 30 21:54:54.521: INFO: stdout: "update-demo-nautilus-458tl update-demo-nautilus-p27sq "
Jan 30 21:54:54.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458tl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1250'
Jan 30 21:54:54.644: INFO: stderr: ""
Jan 30 21:54:54.644: INFO: stdout: "true"
Jan 30 21:54:54.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-458tl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1250'
Jan 30 21:54:54.748: INFO: stderr: ""
Jan 30 21:54:54.748: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 21:54:54.748: INFO: validating pod update-demo-nautilus-458tl
Jan 30 21:54:54.761: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 21:54:54.761: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 21:54:54.761: INFO: update-demo-nautilus-458tl is verified up and running
Jan 30 21:54:54.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p27sq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1250'
Jan 30 21:54:54.885: INFO: stderr: ""
Jan 30 21:54:54.885: INFO: stdout: "true"
Jan 30 21:54:54.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p27sq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1250'
Jan 30 21:54:55.004: INFO: stderr: ""
Jan 30 21:54:55.004: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 21:54:55.004: INFO: validating pod update-demo-nautilus-p27sq
Jan 30 21:54:55.047: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 21:54:55.048: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 21:54:55.048: INFO: update-demo-nautilus-p27sq is verified up and running
STEP: using delete to clean up resources
Jan 30 21:54:55.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1250'
Jan 30 21:54:55.212: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 21:54:55.212: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 30 21:54:55.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1250'
Jan 30 21:54:55.346: INFO: stderr: "No resources found in kubectl-1250 namespace.\n"
Jan 30 21:54:55.346: INFO: stdout: ""
Jan 30 21:54:55.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1250 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 30 21:54:55.586: INFO: stderr: ""
Jan 30 21:54:55.586: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:54:55.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1250" for this suite.

• [SLOW TEST:12.560 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":120,"skipped":1740,"failed":0}
SSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:54:55.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-de549a1f-7419-4e2d-a565-4b56e950d11e
STEP: Creating secret with name secret-projected-all-test-volume-317085ec-dea9-4e6b-a1df-bd08be0b9035
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 30 21:54:55.792: INFO: Waiting up to 5m0s for pod "projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f" in namespace "projected-2804" to be "success or failure"
Jan 30 21:54:55.803: INFO: Pod "projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.720296ms
Jan 30 21:54:57.811: INFO: Pod "projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018734988s
Jan 30 21:54:59.828: INFO: Pod "projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03537796s
Jan 30 21:55:01.836: INFO: Pod "projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043335727s
Jan 30 21:55:03.847: INFO: Pod "projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054768676s
Jan 30 21:55:05.859: INFO: Pod "projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066728762s
STEP: Saw pod success
Jan 30 21:55:05.859: INFO: Pod "projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f" satisfied condition "success or failure"
Jan 30 21:55:05.869: INFO: Trying to get logs from node jerma-node pod projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f container projected-all-volume-test: 
STEP: delete the pod
Jan 30 21:55:06.124: INFO: Waiting for pod projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f to disappear
Jan 30 21:55:06.174: INFO: Pod projected-volume-9c19f9c9-0bd6-497c-a02d-8893b67cee6f no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:55:06.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2804" for this suite.

• [SLOW TEST:10.587 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1744,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:55:06.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2504
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2504
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2504
Jan 30 21:55:06.448: INFO: Found 0 stateful pods, waiting for 1
Jan 30 21:55:16.456: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 30 21:55:16.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 30 21:55:16.921: INFO: stderr: "I0130 21:55:16.703926    1308 log.go:172] (0xc0009ace70) (0xc000aac640) Create stream\nI0130 21:55:16.704626    1308 log.go:172] (0xc0009ace70) (0xc000aac640) Stream added, broadcasting: 1\nI0130 21:55:16.717451    1308 log.go:172] (0xc0009ace70) Reply frame received for 1\nI0130 21:55:16.717644    1308 log.go:172] (0xc0009ace70) (0xc00064fc20) Create stream\nI0130 21:55:16.717672    1308 log.go:172] (0xc0009ace70) (0xc00064fc20) Stream added, broadcasting: 3\nI0130 21:55:16.720211    1308 log.go:172] (0xc0009ace70) Reply frame received for 3\nI0130 21:55:16.720264    1308 log.go:172] (0xc0009ace70) (0xc000594820) Create stream\nI0130 21:55:16.720284    1308 log.go:172] (0xc0009ace70) (0xc000594820) Stream added, broadcasting: 5\nI0130 21:55:16.721577    1308 log.go:172] (0xc0009ace70) Reply frame received for 5\nI0130 21:55:16.797556    1308 log.go:172] (0xc0009ace70) Data frame received for 5\nI0130 21:55:16.797642    1308 log.go:172] (0xc000594820) (5) Data frame handling\nI0130 21:55:16.797660    1308 log.go:172] (0xc000594820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0130 21:55:16.841065    1308 log.go:172] (0xc0009ace70) Data frame received for 3\nI0130 21:55:16.841099    1308 log.go:172] (0xc00064fc20) (3) Data frame handling\nI0130 21:55:16.841114    1308 log.go:172] (0xc00064fc20) (3) Data frame sent\nI0130 21:55:16.911813    1308 log.go:172] (0xc0009ace70) Data frame received for 1\nI0130 21:55:16.911930    1308 log.go:172] (0xc0009ace70) (0xc000594820) Stream removed, broadcasting: 5\nI0130 21:55:16.912019    1308 log.go:172] (0xc000aac640) (1) Data frame handling\nI0130 21:55:16.912042    1308 log.go:172] (0xc000aac640) (1) Data frame sent\nI0130 21:55:16.912056    1308 log.go:172] (0xc0009ace70) (0xc00064fc20) Stream removed, broadcasting: 3\nI0130 21:55:16.912100    1308 log.go:172] (0xc0009ace70) (0xc000aac640) Stream removed, broadcasting: 1\nI0130 21:55:16.912963    1308 log.go:172] (0xc0009ace70) (0xc000aac640) Stream removed, broadcasting: 1\nI0130 21:55:16.912987    1308 log.go:172] (0xc0009ace70) (0xc00064fc20) Stream removed, broadcasting: 3\nI0130 21:55:16.912991    1308 log.go:172] (0xc0009ace70) (0xc000594820) Stream removed, broadcasting: 5\nI0130 21:55:16.913334    1308 log.go:172] (0xc0009ace70) Go away received\n"
Jan 30 21:55:16.921: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 30 21:55:16.921: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 30 21:55:16.929: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 30 21:55:26.936: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 21:55:26.936: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 21:55:27.018: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999608s
Jan 30 21:55:28.028: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.92997941s
Jan 30 21:55:29.036: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.920585602s
Jan 30 21:55:30.084: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.912710702s
Jan 30 21:55:31.090: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.863995897s
Jan 30 21:55:32.097: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.858595493s
Jan 30 21:55:33.102: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.851973718s
Jan 30 21:55:34.114: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.846001092s
Jan 30 21:55:35.120: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.834404741s
Jan 30 21:55:36.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 828.223209ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-2504
Jan 30 21:55:37.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 30 21:55:37.559: INFO: stderr: "I0130 21:55:37.394855    1326 log.go:172] (0xc0009a3810) (0xc000922820) Create stream\nI0130 21:55:37.395008    1326 log.go:172] (0xc0009a3810) (0xc000922820) Stream added, broadcasting: 1\nI0130 21:55:37.409837    1326 log.go:172] (0xc0009a3810) Reply frame received for 1\nI0130 21:55:37.409903    1326 log.go:172] (0xc0009a3810) (0xc000922000) Create stream\nI0130 21:55:37.409913    1326 log.go:172] (0xc0009a3810) (0xc000922000) Stream added, broadcasting: 3\nI0130 21:55:37.411772    1326 log.go:172] (0xc0009a3810) Reply frame received for 3\nI0130 21:55:37.411855    1326 log.go:172] (0xc0009a3810) (0xc0005046e0) Create stream\nI0130 21:55:37.411870    1326 log.go:172] (0xc0009a3810) (0xc0005046e0) Stream added, broadcasting: 5\nI0130 21:55:37.413754    1326 log.go:172] (0xc0009a3810) Reply frame received for 5\nI0130 21:55:37.477081    1326 log.go:172] (0xc0009a3810) Data frame received for 5\nI0130 21:55:37.477207    1326 log.go:172] (0xc0005046e0) (5) Data frame handling\nI0130 21:55:37.477249    1326 log.go:172] (0xc0005046e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0130 21:55:37.477305    1326 log.go:172] (0xc0009a3810) Data frame received for 3\nI0130 21:55:37.477335    1326 log.go:172] (0xc000922000) (3) Data frame handling\nI0130 21:55:37.477349    1326 log.go:172] (0xc000922000) (3) Data frame sent\nI0130 21:55:37.549740    1326 log.go:172] (0xc0009a3810) (0xc000922000) Stream removed, broadcasting: 3\nI0130 21:55:37.549844    1326 log.go:172] (0xc0009a3810) Data frame received for 1\nI0130 21:55:37.549892    1326 log.go:172] (0xc000922820) (1) Data frame handling\nI0130 21:55:37.549916    1326 log.go:172] (0xc000922820) (1) Data frame sent\nI0130 21:55:37.549945    1326 log.go:172] (0xc0009a3810) (0xc000922820) Stream removed, broadcasting: 1\nI0130 21:55:37.549968    1326 log.go:172] (0xc0009a3810) (0xc0005046e0) Stream removed, broadcasting: 5\nI0130 21:55:37.550020    1326 log.go:172] (0xc0009a3810) Go away received\nI0130 21:55:37.550968    1326 log.go:172] (0xc0009a3810) (0xc000922820) Stream removed, broadcasting: 1\nI0130 21:55:37.550986    1326 log.go:172] (0xc0009a3810) (0xc000922000) Stream removed, broadcasting: 3\nI0130 21:55:37.550992    1326 log.go:172] (0xc0009a3810) (0xc0005046e0) Stream removed, broadcasting: 5\n"
Jan 30 21:55:37.559: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 30 21:55:37.559: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 30 21:55:37.567: INFO: Found 1 stateful pods, waiting for 3
Jan 30 21:55:47.576: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:55:47.576: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:55:47.576: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 30 21:55:57.577: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:55:57.577: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 21:55:57.577: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 30 21:55:57.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 30 21:55:58.032: INFO: stderr: "I0130 21:55:57.866365    1348 log.go:172] (0xc000a90000) (0xc000a82000) Create stream\nI0130 21:55:57.866542    1348 log.go:172] (0xc000a90000) (0xc000a82000) Stream added, broadcasting: 1\nI0130 21:55:57.871131    1348 log.go:172] (0xc000a90000) Reply frame received for 1\nI0130 21:55:57.871272    1348 log.go:172] (0xc000a90000) (0xc000a820a0) Create stream\nI0130 21:55:57.871283    1348 log.go:172] (0xc000a90000) (0xc000a820a0) Stream added, broadcasting: 3\nI0130 21:55:57.873316    1348 log.go:172] (0xc000a90000) Reply frame received for 3\nI0130 21:55:57.873392    1348 log.go:172] (0xc000a90000) (0xc0009d40a0) Create stream\nI0130 21:55:57.873402    1348 log.go:172] (0xc000a90000) (0xc0009d40a0) Stream added, broadcasting: 5\nI0130 21:55:57.875760    1348 log.go:172] (0xc000a90000) Reply frame received for 5\nI0130 21:55:57.960677    1348 log.go:172] (0xc000a90000) Data frame received for 3\nI0130 21:55:57.960797    1348 log.go:172] (0xc000a820a0) (3) Data frame handling\nI0130 21:55:57.960824    1348 log.go:172] (0xc000a820a0) (3) Data frame sent\nI0130 21:55:57.960889    1348 log.go:172] (0xc000a90000) Data frame received for 5\nI0130 21:55:57.960909    1348 log.go:172] (0xc0009d40a0) (5) Data frame handling\nI0130 21:55:57.960921    1348 log.go:172] (0xc0009d40a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0130 21:55:58.017484    1348 log.go:172] (0xc000a90000) Data frame received for 1\nI0130 21:55:58.017581    1348 log.go:172] (0xc000a90000) (0xc000a820a0) Stream removed, broadcasting: 3\nI0130 21:55:58.017633    1348 log.go:172] (0xc000a82000) (1) Data frame handling\nI0130 21:55:58.017649    1348 log.go:172] (0xc000a82000) (1) Data frame sent\nI0130 21:55:58.017660    1348 log.go:172] (0xc000a90000) (0xc000a82000) Stream removed, broadcasting: 1\nI0130 21:55:58.017669    1348 log.go:172] (0xc000a90000) (0xc0009d40a0) Stream removed, broadcasting: 5\nI0130 21:55:58.018304    1348 log.go:172] (0xc000a90000) (0xc000a82000) Stream removed, broadcasting: 1\nI0130 21:55:58.018317    1348 log.go:172] (0xc000a90000) (0xc000a820a0) Stream removed, broadcasting: 3\nI0130 21:55:58.018323    1348 log.go:172] (0xc000a90000) (0xc0009d40a0) Stream removed, broadcasting: 5\n"
Jan 30 21:55:58.032: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 30 21:55:58.032: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 30 21:55:58.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 30 21:55:58.486: INFO: stderr: "I0130 21:55:58.232021    1367 log.go:172] (0xc0004ca0b0) (0xc000a60d20) Create stream\nI0130 21:55:58.232247    1367 log.go:172] (0xc0004ca0b0) (0xc000a60d20) Stream added, broadcasting: 1\nI0130 21:55:58.236074    1367 log.go:172] (0xc0004ca0b0) Reply frame received for 1\nI0130 21:55:58.236105    1367 log.go:172] (0xc0004ca0b0) (0xc000ac6a00) Create stream\nI0130 21:55:58.236116    1367 log.go:172] (0xc0004ca0b0) (0xc000ac6a00) Stream added, broadcasting: 3\nI0130 21:55:58.237117    1367 log.go:172] (0xc0004ca0b0) Reply frame received for 3\nI0130 21:55:58.237166    1367 log.go:172] (0xc0004ca0b0) (0xc000a60dc0) Create stream\nI0130 21:55:58.237175    1367 log.go:172] (0xc0004ca0b0) (0xc000a60dc0) Stream added, broadcasting: 5\nI0130 21:55:58.238273    1367 log.go:172] (0xc0004ca0b0) Reply frame received for 5\nI0130 21:55:58.337100    1367 log.go:172] (0xc0004ca0b0) Data frame received for 5\nI0130 21:55:58.337883    1367 log.go:172] (0xc000a60dc0) (5) Data frame handling\nI0130 21:55:58.338042    1367 log.go:172] (0xc000a60dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0130 21:55:58.363052    1367 log.go:172] (0xc0004ca0b0) Data frame received for 3\nI0130 21:55:58.363113    1367 log.go:172] (0xc000ac6a00) (3) Data frame handling\nI0130 21:55:58.363148    1367 log.go:172] (0xc000ac6a00) (3) Data frame sent\nI0130 21:55:58.464084    1367 log.go:172] (0xc0004ca0b0) Data frame received for 1\nI0130 21:55:58.464201    1367 log.go:172] (0xc000a60d20) (1) Data frame handling\nI0130 21:55:58.464230    1367 log.go:172] (0xc000a60d20) (1) Data frame sent\nI0130 21:55:58.464257    1367 log.go:172] (0xc0004ca0b0) (0xc000a60d20) Stream removed, broadcasting: 1\nI0130 21:55:58.465120    1367 log.go:172] (0xc0004ca0b0) (0xc000ac6a00) Stream removed, broadcasting: 3\nI0130 21:55:58.466151    1367 log.go:172] (0xc0004ca0b0) (0xc000a60dc0) Stream removed, broadcasting: 5\nI0130 21:55:58.466244    1367 log.go:172] (0xc0004ca0b0) (0xc000a60d20) Stream removed, broadcasting: 1\nI0130 21:55:58.466258    1367 log.go:172] (0xc0004ca0b0) (0xc000ac6a00) Stream removed, broadcasting: 3\nI0130 21:55:58.466268    1367 log.go:172] (0xc0004ca0b0) (0xc000a60dc0) Stream removed, broadcasting: 5\n"
Jan 30 21:55:58.487: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 30 21:55:58.487: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 30 21:55:58.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 30 21:55:58.983: INFO: stderr: "I0130 21:55:58.778984    1388 log.go:172] (0xc000111340) (0xc0006f1ea0) Create stream\nI0130 21:55:58.779364    1388 log.go:172] (0xc000111340) (0xc0006f1ea0) Stream added, broadcasting: 1\nI0130 21:55:58.782114    1388 log.go:172] (0xc000111340) Reply frame received for 1\nI0130 21:55:58.782186    1388 log.go:172] (0xc000111340) (0xc0002ad4a0) Create stream\nI0130 21:55:58.782198    1388 log.go:172] (0xc000111340) (0xc0002ad4a0) Stream added, broadcasting: 3\nI0130 21:55:58.783676    1388 log.go:172] (0xc000111340) Reply frame received for 3\nI0130 21:55:58.783700    1388 log.go:172] (0xc000111340) (0xc0006946e0) Create stream\nI0130 21:55:58.783711    1388 log.go:172] (0xc000111340) (0xc0006946e0) Stream added, broadcasting: 5\nI0130 21:55:58.785245    1388 log.go:172] (0xc000111340) Reply frame received for 5\nI0130 21:55:58.868700    1388 log.go:172] (0xc000111340) Data frame received for 5\nI0130 21:55:58.868825    1388 log.go:172] (0xc0006946e0) (5) Data frame handling\nI0130 21:55:58.868853    1388 log.go:172] (0xc0006946e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0130 21:55:58.900074    1388 log.go:172] (0xc000111340) Data frame received for 3\nI0130 21:55:58.900236    1388 log.go:172] (0xc0002ad4a0) (3) Data frame handling\nI0130 21:55:58.900313    1388 log.go:172] (0xc0002ad4a0) (3) Data frame sent\nI0130 21:55:58.972660    1388 log.go:172] (0xc000111340) Data frame received for 1\nI0130 21:55:58.972812    1388 log.go:172] (0xc000111340) (0xc0002ad4a0) Stream removed, broadcasting: 3\nI0130 21:55:58.972860    1388 log.go:172] (0xc0006f1ea0) (1) Data frame handling\nI0130 21:55:58.972885    1388 log.go:172] (0xc0006f1ea0) (1) Data frame sent\nI0130 21:55:58.972914    1388 log.go:172] (0xc000111340) (0xc0006946e0) Stream removed, broadcasting: 5\nI0130 21:55:58.972947    1388 log.go:172] (0xc000111340) (0xc0006f1ea0) Stream removed, broadcasting: 1\nI0130 21:55:58.973004    1388 log.go:172] (0xc000111340) Go away received\nI0130 21:55:58.974492    1388 log.go:172] (0xc000111340) (0xc0006f1ea0) Stream removed, broadcasting: 1\nI0130 21:55:58.974522    1388 log.go:172] (0xc000111340) (0xc0002ad4a0) Stream removed, broadcasting: 3\nI0130 21:55:58.974535    1388 log.go:172] (0xc000111340) (0xc0006946e0) Stream removed, broadcasting: 5\n"
Jan 30 21:55:58.984: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 30 21:55:58.984: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 30 21:55:58.984: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 21:55:58.988: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 30 21:56:08.997: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 21:56:08.998: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 21:56:08.998: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 21:56:09.018: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999671s
Jan 30 21:56:10.034: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986369571s
Jan 30 21:56:11.039: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969880018s
Jan 30 21:56:12.048: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964750476s
Jan 30 21:56:13.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.956704692s
Jan 30 21:56:14.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.9500859s
Jan 30 21:56:15.555: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.455477716s
Jan 30 21:56:16.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.449144861s
Jan 30 21:56:17.583: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.43707249s
Jan 30 21:56:18.597: INFO: Verifying statefulset ss doesn't scale past 3 for another 421.126369ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-2504
Jan 30 21:56:19.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 30 21:56:20.214: INFO: stderr: "I0130 21:56:20.025074    1408 log.go:172] (0xc000b5b290) (0xc000952320) Create stream\nI0130 21:56:20.025355    1408 log.go:172] (0xc000b5b290) (0xc000952320) Stream added, broadcasting: 1\nI0130 21:56:20.042495    1408 log.go:172] (0xc000b5b290) Reply frame received for 1\nI0130 21:56:20.042701    1408 log.go:172] (0xc000b5b290) (0xc000928000) Create stream\nI0130 21:56:20.042733    1408 log.go:172] (0xc000b5b290) (0xc000928000) Stream added, broadcasting: 3\nI0130 21:56:20.045604    1408 log.go:172] (0xc000b5b290) Reply frame received for 3\nI0130 21:56:20.045781    1408 log.go:172] (0xc000b5b290) (0xc000952000) Create stream\nI0130 21:56:20.045822    1408 log.go:172] (0xc000b5b290) (0xc000952000) Stream added, broadcasting: 5\nI0130 21:56:20.047651    1408 log.go:172] (0xc000b5b290) Reply frame received for 5\nI0130 21:56:20.117585    1408 log.go:172] (0xc000b5b290) Data frame received for 5\nI0130 21:56:20.117668    1408 log.go:172] (0xc000952000) (5) Data frame handling\nI0130 21:56:20.117732    1408 log.go:172] (0xc000952000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0130 21:56:20.118428    1408 log.go:172] (0xc000b5b290) Data frame received for 3\nI0130 21:56:20.118443    1408 log.go:172] (0xc000928000) (3) Data frame handling\nI0130 21:56:20.118454    1408 log.go:172] (0xc000928000) (3) Data frame sent\nI0130 21:56:20.197855    1408 log.go:172] (0xc000b5b290) Data frame received for 1\nI0130 21:56:20.197957    1408 log.go:172] (0xc000b5b290) (0xc000952000) Stream removed, broadcasting: 5\nI0130 21:56:20.198007    1408 log.go:172] (0xc000952320) (1) Data frame handling\nI0130 21:56:20.198033    1408 log.go:172] (0xc000952320) (1) Data frame sent\nI0130 21:56:20.198071    1408 log.go:172] (0xc000b5b290) (0xc000928000) Stream removed, broadcasting: 3\nI0130 21:56:20.198100    1408 log.go:172] (0xc000b5b290) (0xc000952320) Stream removed, broadcasting: 1\nI0130 21:56:20.198120    1408 log.go:172] (0xc000b5b290) Go away received\nI0130 21:56:20.199657    1408 log.go:172] (0xc000b5b290) (0xc000952320) Stream removed, broadcasting: 1\nI0130 21:56:20.199747    1408 log.go:172] (0xc000b5b290) (0xc000928000) Stream removed, broadcasting: 3\nI0130 21:56:20.199767    1408 log.go:172] (0xc000b5b290) (0xc000952000) Stream removed, broadcasting: 5\n"
Jan 30 21:56:20.214: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 30 21:56:20.214: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 30 21:56:20.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 30 21:56:20.545: INFO: stderr: "I0130 21:56:20.376386    1430 log.go:172] (0xc000116bb0) (0xc00059bae0) Create stream\nI0130 21:56:20.376619    1430 log.go:172] (0xc000116bb0) (0xc00059bae0) Stream added, broadcasting: 1\nI0130 21:56:20.380069    1430 log.go:172] (0xc000116bb0) Reply frame received for 1\nI0130 21:56:20.380123    1430 log.go:172] (0xc000116bb0) (0xc0006cc000) Create stream\nI0130 21:56:20.380138    1430 log.go:172] (0xc000116bb0) (0xc0006cc000) Stream added, broadcasting: 3\nI0130 21:56:20.381512    1430 log.go:172] (0xc000116bb0) Reply frame received for 3\nI0130 21:56:20.381535    1430 log.go:172] (0xc000116bb0) (0xc00059bcc0) Create stream\nI0130 21:56:20.381546    1430 log.go:172] (0xc000116bb0) (0xc00059bcc0) Stream added, broadcasting: 5\nI0130 21:56:20.382594    1430 log.go:172] (0xc000116bb0) Reply frame received for 5\nI0130 21:56:20.445110    1430 log.go:172] (0xc000116bb0) Data frame received for 5\nI0130 21:56:20.445266    1430 log.go:172] (0xc00059bcc0) (5) Data frame handling\nI0130 21:56:20.445306    1430 log.go:172] (0xc00059bcc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0130 21:56:20.445404    1430 log.go:172] (0xc000116bb0) Data frame received for 3\nI0130 21:56:20.445446    1430 log.go:172] (0xc0006cc000) (3) Data frame handling\nI0130 21:56:20.445465    1430 log.go:172] (0xc0006cc000) (3) Data frame sent\nI0130 21:56:20.531350    1430 log.go:172] (0xc000116bb0) Data frame received for 1\nI0130 21:56:20.531452    1430 log.go:172] (0xc000116bb0) (0xc0006cc000) Stream removed, broadcasting: 3\nI0130 21:56:20.531710    1430 log.go:172] (0xc00059bae0) (1) Data frame handling\nI0130 21:56:20.531751    1430 log.go:172] (0xc00059bae0) (1) Data frame sent\nI0130 21:56:20.531799    1430 log.go:172] (0xc000116bb0) (0xc00059bcc0) Stream removed, broadcasting: 5\nI0130 21:56:20.531834    1430 log.go:172] (0xc000116bb0) (0xc00059bae0) Stream removed, broadcasting: 1\nI0130 21:56:20.532132    1430 log.go:172] (0xc000116bb0) Go away received\nI0130 21:56:20.532718    1430 log.go:172] (0xc000116bb0) (0xc00059bae0) Stream removed, broadcasting: 1\nI0130 21:56:20.532741    1430 log.go:172] (0xc000116bb0) (0xc0006cc000) Stream removed, broadcasting: 3\nI0130 21:56:20.532755    1430 log.go:172] (0xc000116bb0) (0xc00059bcc0) Stream removed, broadcasting: 5\n"
Jan 30 21:56:20.545: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 30 21:56:20.545: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 30 21:56:20.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 30 21:56:21.029: INFO: stderr: "I0130 21:56:20.818434    1451 log.go:172] (0xc000b4a580) (0xc000615d60) Create stream\nI0130 21:56:20.818662    1451 log.go:172] (0xc000b4a580) (0xc000615d60) Stream added, broadcasting: 1\nI0130 21:56:20.830743    1451 log.go:172] (0xc000b4a580) Reply frame received for 1\nI0130 21:56:20.830885    1451 log.go:172] (0xc000b4a580) (0xc000ad6320) Create stream\nI0130 21:56:20.830899    1451 log.go:172] (0xc000b4a580) (0xc000ad6320) Stream added, broadcasting: 3\nI0130 21:56:20.832302    1451 log.go:172] (0xc000b4a580) Reply frame received for 3\nI0130 21:56:20.832336    1451 log.go:172] (0xc000b4a580) (0xc000ad63c0) Create stream\nI0130 21:56:20.832346    1451 log.go:172] (0xc000b4a580) (0xc000ad63c0) Stream added, broadcasting: 5\nI0130 21:56:20.834498    1451 log.go:172] (0xc000b4a580) Reply frame received for 5\nI0130 21:56:20.923500    1451 log.go:172] (0xc000b4a580) Data frame received for 3\nI0130 21:56:20.923553    1451 log.go:172] (0xc000ad6320) (3) Data frame handling\nI0130 21:56:20.923572    1451 log.go:172] (0xc000ad6320) (3) Data frame sent\nI0130 21:56:20.923596    1451 log.go:172] (0xc000b4a580) Data frame received for 5\nI0130 21:56:20.923605    1451 log.go:172] (0xc000ad63c0) (5) Data frame handling\nI0130 21:56:20.923621    1451 log.go:172] (0xc000ad63c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0130 21:56:21.000511    1451 log.go:172] (0xc000b4a580) Data frame received for 1\nI0130 21:56:21.000594    1451 log.go:172] (0xc000615d60) (1) Data frame handling\nI0130 21:56:21.000618    1451 log.go:172] (0xc000615d60) (1) Data frame sent\nI0130 21:56:21.000646    1451 log.go:172] (0xc000b4a580) (0xc000615d60) Stream removed, broadcasting: 1\nI0130 21:56:21.008555    1451 log.go:172] (0xc000b4a580) (0xc000ad6320) Stream removed, broadcasting: 3\nI0130 21:56:21.008599    1451 log.go:172] (0xc000b4a580) (0xc000ad63c0) Stream removed, broadcasting: 5\nI0130 21:56:21.008622    1451 log.go:172] (0xc000b4a580) (0xc000615d60) Stream removed, broadcasting: 1\nI0130 21:56:21.008630    1451 log.go:172] (0xc000b4a580) (0xc000ad6320) Stream removed, broadcasting: 3\nI0130 21:56:21.008639    1451 log.go:172] (0xc000b4a580) (0xc000ad63c0) Stream removed, broadcasting: 5\nI0130 21:56:21.008726    1451 log.go:172] (0xc000b4a580) Go away received\n"
Jan 30 21:56:21.030: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 30 21:56:21.030: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 30 21:56:21.030: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 30 21:57:01.063: INFO: Deleting all statefulset in ns statefulset-2504
Jan 30 21:57:01.067: INFO: Scaling statefulset ss to 0
Jan 30 21:57:01.081: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 21:57:01.085: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:57:01.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2504" for this suite.

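The ordered behaviour verified above can also be watched by hand; a minimal sketch, assuming the stateful set still exists in namespace statefulset-2504 and uses the default OrderedReady pod management policy:

    # Scale up: pods ss-0, ss-1, ss-2 are created strictly in ordinal order
    kubectl --namespace=statefulset-2504 scale statefulset ss --replicas=3
    kubectl --namespace=statefulset-2504 get pods -w
    # Scale down: pods are removed in reverse ordinal order, ss-2 first
    kubectl --namespace=statefulset-2504 scale statefulset ss --replicas=0
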
• [SLOW TEST:114.935 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":122,"skipped":1759,"failed":0}
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:57:01.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-qqhgr in namespace proxy-4051
I0130 21:57:01.259297       8 runners.go:189] Created replication controller with name: proxy-service-qqhgr, namespace: proxy-4051, replica count: 1
I0130 21:57:02.310478       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:57:03.311152       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:57:04.311581       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:57:05.312303       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:57:06.312847       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:57:07.313356       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 21:57:08.314299       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 21:57:09.314951       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 21:57:10.315298       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 21:57:11.315703       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0130 21:57:12.316262       8 runners.go:189] proxy-service-qqhgr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 30 21:57:12.335: INFO: setup took 11.108642709s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
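Each attempt below issues a GET through the apiserver proxy subresource: pod targets take the form pods/<scheme>:<pod>:<port>/proxy/ and service targets services/<scheme>:<service>:<portname>/proxy/, with the scheme prefix optional. A minimal sketch of replaying one attempt by hand, assuming the namespace and object names from this run:

    # Proxy to a pod port directly through the apiserver
    kubectl --kubeconfig=/root/.kube/config get --raw \
      /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/
    # Proxy to a named service port over TLS
    kubectl --kubeconfig=/root/.kube/config get --raw \
      /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/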
Jan 30 21:57:12.363: INFO: (0) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 27.593299ms)
Jan 30 21:57:12.364: INFO: (0) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 28.224701ms)
Jan 30 21:57:12.364: INFO: (0) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 28.693216ms)
Jan 30 21:57:12.365: INFO: (0) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 29.554859ms)
Jan 30 21:57:12.367: INFO: (0) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 31.306444ms)
Jan 30 21:57:12.367: INFO: (0) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 30.890305ms)
Jan 30 21:57:12.367: INFO: (0) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 31.006471ms)
Jan 30 21:57:12.367: INFO: (0) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 31.021981ms)
Jan 30 21:57:12.367: INFO: (0) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 30.799323ms)
Jan 30 21:57:12.367: INFO: (0) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 31.303393ms)
Jan 30 21:57:12.367: INFO: (0) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 31.367685ms)
Jan 30 21:57:12.371: INFO: (0) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 35.230359ms)
Jan 30 21:57:12.371: INFO: (0) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 35.290214ms)
Jan 30 21:57:12.371: INFO: (0) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 35.229034ms)
Jan 30 21:57:12.371: INFO: (0) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test<... (200; 11.298659ms)
Jan 30 21:57:12.386: INFO: (1) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 14.111611ms)
Jan 30 21:57:12.387: INFO: (1) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 15.043945ms)
Jan 30 21:57:12.387: INFO: (1) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 14.809313ms)
Jan 30 21:57:12.387: INFO: (1) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 15.241812ms)
Jan 30 21:57:12.387: INFO: (1) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 14.947667ms)
Jan 30 21:57:12.387: INFO: (1) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 15.391711ms)
Jan 30 21:57:12.388: INFO: (1) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 15.221349ms)
Jan 30 21:57:12.389: INFO: (1) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 16.484131ms)
Jan 30 21:57:12.416: INFO: (2) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 25.799062ms)
Jan 30 21:57:12.416: INFO: (2) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 26.645654ms)
Jan 30 21:57:12.416: INFO: (2) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test<... (200; 26.825876ms)
Jan 30 21:57:12.416: INFO: (2) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 26.552862ms)
Jan 30 21:57:12.416: INFO: (2) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 26.302882ms)
Jan 30 21:57:12.416: INFO: (2) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 26.445487ms)
Jan 30 21:57:12.418: INFO: (2) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 28.667985ms)
Jan 30 21:57:12.418: INFO: (2) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 28.651647ms)
Jan 30 21:57:12.419: INFO: (2) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 29.23351ms)
Jan 30 21:57:12.419: INFO: (2) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 29.819984ms)
Jan 30 21:57:12.419: INFO: (2) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 29.602103ms)
Jan 30 21:57:12.419: INFO: (2) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 30.023216ms)
Jan 30 21:57:12.419: INFO: (2) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 29.609347ms)
Jan 30 21:57:12.419: INFO: (2) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 30.003189ms)
Jan 30 21:57:12.460: INFO: (3) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 40.064556ms)
Jan 30 21:57:12.460: INFO: (3) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 40.45893ms)
Jan 30 21:57:12.461: INFO: (3) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 40.553933ms)
Jan 30 21:57:12.463: INFO: (3) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 43.031666ms)
Jan 30 21:57:12.463: INFO: (3) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 42.840897ms)
Jan 30 21:57:12.463: INFO: (3) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 43.003339ms)
Jan 30 21:57:12.463: INFO: (3) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 42.684171ms)
Jan 30 21:57:12.463: INFO: (3) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 43.109761ms)
Jan 30 21:57:12.466: INFO: (3) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 45.540398ms)
Jan 30 21:57:12.466: INFO: (3) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test<... (200; 45.58859ms)
Jan 30 21:57:12.466: INFO: (3) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 45.662592ms)
Jan 30 21:57:12.467: INFO: (3) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 46.609951ms)
Jan 30 21:57:12.468: INFO: (3) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 48.178485ms)
Jan 30 21:57:12.468: INFO: (3) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 48.306176ms)
Jan 30 21:57:12.468: INFO: (3) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 48.059974ms)
Jan 30 21:57:12.493: INFO: (4) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 23.890252ms)
Jan 30 21:57:12.493: INFO: (4) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 24.153207ms)
Jan 30 21:57:12.493: INFO: (4) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 24.272169ms)
Jan 30 21:57:12.493: INFO: (4) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 24.160937ms)
Jan 30 21:57:12.493: INFO: (4) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 24.346297ms)
Jan 30 21:57:12.493: INFO: (4) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 24.79383ms)
Jan 30 21:57:12.495: INFO: (4) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 27.15072ms)
Jan 30 21:57:12.496: INFO: (4) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 27.351074ms)
Jan 30 21:57:12.496: INFO: (4) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 27.437903ms)
Jan 30 21:57:12.496: INFO: (4) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 27.494539ms)
Jan 30 21:57:12.496: INFO: (4) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 27.528796ms)
Jan 30 21:57:12.496: INFO: (4) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 27.452139ms)
Jan 30 21:57:12.496: INFO: (4) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test (200; 14.205977ms)
Jan 30 21:57:12.513: INFO: (5) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 15.134487ms)
Jan 30 21:57:12.513: INFO: (5) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 14.716826ms)
Jan 30 21:57:12.513: INFO: (5) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 14.835946ms)
Jan 30 21:57:12.514: INFO: (5) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 16.23295ms)
Jan 30 21:57:12.515: INFO: (5) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 16.546037ms)
Jan 30 21:57:12.515: INFO: (5) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 16.499503ms)
Jan 30 21:57:12.515: INFO: (5) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 16.810276ms)
Jan 30 21:57:12.526: INFO: (6) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 10.307834ms)
Jan 30 21:57:12.526: INFO: (6) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 10.446111ms)
Jan 30 21:57:12.527: INFO: (6) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 11.235499ms)
Jan 30 21:57:12.527: INFO: (6) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 11.63321ms)
Jan 30 21:57:12.527: INFO: (6) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 11.799052ms)
Jan 30 21:57:12.527: INFO: (6) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 11.507546ms)
Jan 30 21:57:12.527: INFO: (6) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 11.530636ms)
Jan 30 21:57:12.527: INFO: (6) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 11.569641ms)
Jan 30 21:57:12.527: INFO: (6) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test (200; 11.70806ms)
Jan 30 21:57:12.528: INFO: (6) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 12.934143ms)
Jan 30 21:57:12.528: INFO: (6) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 13.388268ms)
Jan 30 21:57:12.528: INFO: (6) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 13.185891ms)
Jan 30 21:57:12.529: INFO: (6) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 13.475712ms)
Jan 30 21:57:12.529: INFO: (6) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 13.417144ms)
Jan 30 21:57:12.530: INFO: (6) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 14.730495ms)
Jan 30 21:57:12.533: INFO: (7) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 2.887865ms)
Jan 30 21:57:12.535: INFO: (7) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 5.22738ms)
Jan 30 21:57:12.540: INFO: (7) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 7.922275ms)
Jan 30 21:57:12.540: INFO: (7) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 9.473261ms)
Jan 30 21:57:12.540: INFO: (7) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 8.754271ms)
Jan 30 21:57:12.540: INFO: (7) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 8.756143ms)
Jan 30 21:57:12.541: INFO: (7) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: ... (200; 8.91462ms)
Jan 30 21:57:12.541: INFO: (7) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 9.122506ms)
Jan 30 21:57:12.543: INFO: (7) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 11.466148ms)
Jan 30 21:57:12.543: INFO: (7) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 11.853383ms)
Jan 30 21:57:12.543: INFO: (7) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 11.339633ms)
Jan 30 21:57:12.543: INFO: (7) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 11.152868ms)
Jan 30 21:57:12.543: INFO: (7) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 11.464619ms)
Jan 30 21:57:12.555: INFO: (8) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: ... (200; 11.630276ms)
Jan 30 21:57:12.555: INFO: (8) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 11.661312ms)
Jan 30 21:57:12.555: INFO: (8) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 11.946899ms)
Jan 30 21:57:12.555: INFO: (8) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 12.262409ms)
Jan 30 21:57:12.560: INFO: (8) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 16.614076ms)
Jan 30 21:57:12.560: INFO: (8) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 16.951166ms)
Jan 30 21:57:12.561: INFO: (8) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 17.577454ms)
Jan 30 21:57:12.561: INFO: (8) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 17.886549ms)
Jan 30 21:57:12.561: INFO: (8) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 17.806665ms)
Jan 30 21:57:12.561: INFO: (8) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 18.010677ms)
Jan 30 21:57:12.561: INFO: (8) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 18.013331ms)
Jan 30 21:57:12.562: INFO: (8) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 18.311451ms)
Jan 30 21:57:12.562: INFO: (8) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 18.295763ms)
Jan 30 21:57:12.562: INFO: (8) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 18.911532ms)
Jan 30 21:57:12.576: INFO: (9) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 12.952824ms)
Jan 30 21:57:12.577: INFO: (9) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 13.973562ms)
Jan 30 21:57:12.577: INFO: (9) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 14.380223ms)
Jan 30 21:57:12.577: INFO: (9) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test (200; 14.840452ms)
Jan 30 21:57:12.578: INFO: (9) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 15.018793ms)
Jan 30 21:57:12.578: INFO: (9) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 14.965307ms)
Jan 30 21:57:12.578: INFO: (9) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 15.420575ms)
Jan 30 21:57:12.578: INFO: (9) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 15.253621ms)
Jan 30 21:57:12.580: INFO: (9) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 16.880455ms)
Jan 30 21:57:12.580: INFO: (9) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 17.516013ms)
Jan 30 21:57:12.580: INFO: (9) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 17.222373ms)
Jan 30 21:57:12.580: INFO: (9) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 17.160862ms)
Jan 30 21:57:12.580: INFO: (9) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 17.747991ms)
Jan 30 21:57:12.580: INFO: (9) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 17.820061ms)
Jan 30 21:57:12.588: INFO: (10) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 7.621421ms)
Jan 30 21:57:12.590: INFO: (10) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 9.192057ms)
Jan 30 21:57:12.591: INFO: (10) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 9.996545ms)
Jan 30 21:57:12.594: INFO: (10) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 13.288573ms)
Jan 30 21:57:12.595: INFO: (10) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 13.613536ms)
Jan 30 21:57:12.595: INFO: (10) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 13.704921ms)
Jan 30 21:57:12.595: INFO: (10) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 14.04214ms)
Jan 30 21:57:12.595: INFO: (10) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 13.834905ms)
Jan 30 21:57:12.597: INFO: (10) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 15.65134ms)
Jan 30 21:57:12.597: INFO: (10) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test<... (200; 15.885487ms)
Jan 30 21:57:12.597: INFO: (10) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 16.460754ms)
Jan 30 21:57:12.598: INFO: (10) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 16.463915ms)
Jan 30 21:57:12.598: INFO: (10) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 16.960135ms)
Jan 30 21:57:12.608: INFO: (11) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 9.978048ms)
Jan 30 21:57:12.609: INFO: (11) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 10.889996ms)
Jan 30 21:57:12.612: INFO: (11) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 13.754014ms)
Jan 30 21:57:12.612: INFO: (11) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 13.458918ms)
Jan 30 21:57:12.612: INFO: (11) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 13.482492ms)
Jan 30 21:57:12.612: INFO: (11) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test<... (200; 14.103631ms)
Jan 30 21:57:12.614: INFO: (11) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 16.435392ms)
Jan 30 21:57:12.614: INFO: (11) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 16.024622ms)
Jan 30 21:57:12.620: INFO: (11) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 21.544308ms)
Jan 30 21:57:12.620: INFO: (11) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 22.149668ms)
Jan 30 21:57:12.620: INFO: (11) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 21.977718ms)
Jan 30 21:57:12.620: INFO: (11) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 22.10527ms)
Jan 30 21:57:12.621: INFO: (11) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 22.463837ms)
Jan 30 21:57:12.636: INFO: (12) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 14.904221ms)
Jan 30 21:57:12.639: INFO: (12) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 18.490734ms)
Jan 30 21:57:12.640: INFO: (12) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 18.04812ms)
Jan 30 21:57:12.640: INFO: (12) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 18.355229ms)
Jan 30 21:57:12.641: INFO: (12) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 19.013249ms)
Jan 30 21:57:12.641: INFO: (12) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 20.103621ms)
Jan 30 21:57:12.641: INFO: (12) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 19.606104ms)
Jan 30 21:57:12.641: INFO: (12) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: ... (200; 8.934382ms)
Jan 30 21:57:12.653: INFO: (13) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 9.092432ms)
Jan 30 21:57:12.653: INFO: (13) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 9.174616ms)
Jan 30 21:57:12.654: INFO: (13) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 9.829734ms)
Jan 30 21:57:12.655: INFO: (13) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 10.591236ms)
Jan 30 21:57:12.655: INFO: (13) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 10.686086ms)
Jan 30 21:57:12.655: INFO: (13) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 10.600276ms)
Jan 30 21:57:12.655: INFO: (13) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 11.098009ms)
Jan 30 21:57:12.655: INFO: (13) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 11.108417ms)
Jan 30 21:57:12.657: INFO: (13) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 12.594132ms)
Jan 30 21:57:12.657: INFO: (13) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 12.709815ms)
Jan 30 21:57:12.658: INFO: (13) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test<... (200; 7.766587ms)
Jan 30 21:57:12.669: INFO: (14) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 8.170443ms)
Jan 30 21:57:12.669: INFO: (14) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 8.489032ms)
Jan 30 21:57:12.669: INFO: (14) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 8.36653ms)
Jan 30 21:57:12.670: INFO: (14) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 9.320469ms)
Jan 30 21:57:12.670: INFO: (14) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 9.695415ms)
Jan 30 21:57:12.671: INFO: (14) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 10.878395ms)
Jan 30 21:57:12.671: INFO: (14) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 10.728123ms)
Jan 30 21:57:12.671: INFO: (14) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 11.273837ms)
Jan 30 21:57:12.671: INFO: (14) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 10.839751ms)
Jan 30 21:57:12.671: INFO: (14) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test<... (200; 10.43867ms)
Jan 30 21:57:12.684: INFO: (15) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 10.672305ms)
Jan 30 21:57:12.684: INFO: (15) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 10.902576ms)
Jan 30 21:57:12.684: INFO: (15) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: ... (200; 11.616837ms)
Jan 30 21:57:12.686: INFO: (15) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 12.32609ms)
Jan 30 21:57:12.686: INFO: (15) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 12.674337ms)
Jan 30 21:57:12.686: INFO: (15) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 12.665446ms)
Jan 30 21:57:12.686: INFO: (15) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 12.858292ms)
Jan 30 21:57:12.687: INFO: (15) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 13.58718ms)
Jan 30 21:57:12.687: INFO: (15) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 13.590365ms)
Jan 30 21:57:12.687: INFO: (15) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 13.603991ms)
Jan 30 21:57:12.690: INFO: (16) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 3.491718ms)
Jan 30 21:57:12.701: INFO: (16) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 12.565092ms)
Jan 30 21:57:12.701: INFO: (16) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 14.004056ms)
Jan 30 21:57:12.701: INFO: (16) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 14.192594ms)
Jan 30 21:57:12.701: INFO: (16) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 13.309517ms)
Jan 30 21:57:12.701: INFO: (16) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 13.607379ms)
Jan 30 21:57:12.702: INFO: (16) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 13.039738ms)
Jan 30 21:57:12.702: INFO: (16) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 13.564886ms)
Jan 30 21:57:12.702: INFO: (16) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test (200; 15.63416ms)
Jan 30 21:57:12.712: INFO: (17) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 7.88468ms)
Jan 30 21:57:12.712: INFO: (17) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 8.259609ms)
Jan 30 21:57:12.712: INFO: (17) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 8.298408ms)
Jan 30 21:57:12.712: INFO: (17) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 8.415004ms)
Jan 30 21:57:12.712: INFO: (17) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test (200; 13.854808ms)
Jan 30 21:57:12.718: INFO: (17) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 13.861401ms)
Jan 30 21:57:12.718: INFO: (17) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 13.875808ms)
Jan 30 21:57:12.718: INFO: (17) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 13.956934ms)
Jan 30 21:57:12.720: INFO: (17) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 15.750795ms)
Jan 30 21:57:12.720: INFO: (17) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 15.816414ms)
Jan 30 21:57:12.720: INFO: (17) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 15.836128ms)
Jan 30 21:57:12.720: INFO: (17) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 15.818867ms)
Jan 30 21:57:12.720: INFO: (17) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 15.892741ms)
Jan 30 21:57:12.720: INFO: (17) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 16.056026ms)
Jan 30 21:57:12.725: INFO: (18) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 4.534741ms)
Jan 30 21:57:12.729: INFO: (18) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 8.958892ms)
Jan 30 21:57:12.732: INFO: (18) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:1080/proxy/: test<... (200; 11.479005ms)
Jan 30 21:57:12.732: INFO: (18) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 11.887188ms)
Jan 30 21:57:12.732: INFO: (18) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm/proxy/: test (200; 12.266673ms)
Jan 30 21:57:12.732: INFO: (18) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test<... (200; 4.630114ms)
Jan 30 21:57:12.741: INFO: (19) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:460/proxy/: tls baz (200; 4.778532ms)
Jan 30 21:57:12.742: INFO: (19) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 5.957424ms)
Jan 30 21:57:12.743: INFO: (19) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:462/proxy/: tls qux (200; 6.557069ms)
Jan 30 21:57:12.743: INFO: (19) /api/v1/namespaces/proxy-4051/pods/https:proxy-service-qqhgr-kb5sm:443/proxy/: test (200; 6.595864ms)
Jan 30 21:57:12.743: INFO: (19) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname1/proxy/: tls baz (200; 7.174041ms)
Jan 30 21:57:12.743: INFO: (19) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 7.04512ms)
Jan 30 21:57:12.744: INFO: (19) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:1080/proxy/: ... (200; 8.219495ms)
Jan 30 21:57:12.745: INFO: (19) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname1/proxy/: foo (200; 8.231451ms)
Jan 30 21:57:12.745: INFO: (19) /api/v1/namespaces/proxy-4051/services/https:proxy-service-qqhgr:tlsportname2/proxy/: tls qux (200; 8.403845ms)
Jan 30 21:57:12.745: INFO: (19) /api/v1/namespaces/proxy-4051/pods/http:proxy-service-qqhgr-kb5sm:160/proxy/: foo (200; 8.816242ms)
Jan 30 21:57:12.745: INFO: (19) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname1/proxy/: foo (200; 9.060461ms)
Jan 30 21:57:12.746: INFO: (19) /api/v1/namespaces/proxy-4051/services/http:proxy-service-qqhgr:portname2/proxy/: bar (200; 9.042905ms)
Jan 30 21:57:12.746: INFO: (19) /api/v1/namespaces/proxy-4051/services/proxy-service-qqhgr:portname2/proxy/: bar (200; 9.655808ms)
Jan 30 21:57:12.746: INFO: (19) /api/v1/namespaces/proxy-4051/pods/proxy-service-qqhgr-kb5sm:162/proxy/: bar (200; 9.951207ms)
STEP: deleting ReplicationController proxy-service-qqhgr in namespace proxy-4051, will wait for the garbage collector to delete the pods
Jan 30 21:57:12.808: INFO: Deleting ReplicationController proxy-service-qqhgr took: 8.362658ms
Jan 30 21:57:13.108: INFO: Terminating ReplicationController proxy-service-qqhgr pods took: 300.708979ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:57:22.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4051" for this suite.

• [SLOW TEST:21.310 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":123,"skipped":1764,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:57:22.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 30 21:57:22.598: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8810 /api/v1/namespaces/watch-8810/configmaps/e2e-watch-test-resource-version 6916e4ec-e468-4260-815b-2e68009d817b 5377520 0 2020-01-30 21:57:22 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 30 21:57:22.599: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-8810 /api/v1/namespaces/watch-8810/configmaps/e2e-watch-test-resource-version 6916e4ec-e468-4260-815b-2e68009d817b 5377521 0 2020-01-30 21:57:22 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:57:22.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8810" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":124,"skipped":1793,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:57:22.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 30 21:57:22.713: INFO: Waiting up to 5m0s for pod "pod-7f139f73-01da-467f-b02e-47ae48bee89f" in namespace "emptydir-2316" to be "success or failure"
Jan 30 21:57:22.724: INFO: Pod "pod-7f139f73-01da-467f-b02e-47ae48bee89f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.7468ms
Jan 30 21:57:24.731: INFO: Pod "pod-7f139f73-01da-467f-b02e-47ae48bee89f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017170493s
Jan 30 21:57:26.763: INFO: Pod "pod-7f139f73-01da-467f-b02e-47ae48bee89f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049993193s
Jan 30 21:57:28.771: INFO: Pod "pod-7f139f73-01da-467f-b02e-47ae48bee89f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057826534s
Jan 30 21:57:30.780: INFO: Pod "pod-7f139f73-01da-467f-b02e-47ae48bee89f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066669816s
STEP: Saw pod success
Jan 30 21:57:30.780: INFO: Pod "pod-7f139f73-01da-467f-b02e-47ae48bee89f" satisfied condition "success or failure"
Jan 30 21:57:30.784: INFO: Trying to get logs from node jerma-node pod pod-7f139f73-01da-467f-b02e-47ae48bee89f container test-container: 
STEP: delete the pod
Jan 30 21:57:30.835: INFO: Waiting for pod pod-7f139f73-01da-467f-b02e-47ae48bee89f to disappear
Jan 30 21:57:30.843: INFO: Pod pod-7f139f73-01da-467f-b02e-47ae48bee89f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:57:30.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2316" for this suite.

• [SLOW TEST:8.236 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1809,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:57:30.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 21:57:32.497: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 21:57:34.597: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 21:57:36.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 21:57:38.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018252, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 21:57:41.623: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1

[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan 30 21:57:47.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-9048 to-be-attached-pod -i -c=container1'
Jan 30 21:57:47.998: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:57:48.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9048" for this suite.
STEP: Destroying namespace "webhook-9048-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:17.257 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":126,"skipped":1821,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:57:48.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 21:57:49.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 21:57:51.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 21:57:53.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 21:57:55.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 21:57:57.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716018269, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 21:58:00.186: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:58:00.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6286" for this suite.
STEP: Destroying namespace "webhook-6286-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.127 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":127,"skipped":1850,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:58:01.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 30 21:58:01.410: INFO: Number of nodes with available pods: 0
Jan 30 21:58:01.411: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:02.784: INFO: Number of nodes with available pods: 0
Jan 30 21:58:02.784: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:03.894: INFO: Number of nodes with available pods: 0
Jan 30 21:58:03.895: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:04.422: INFO: Number of nodes with available pods: 0
Jan 30 21:58:04.422: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:05.481: INFO: Number of nodes with available pods: 0
Jan 30 21:58:05.481: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:07.311: INFO: Number of nodes with available pods: 0
Jan 30 21:58:07.311: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:07.667: INFO: Number of nodes with available pods: 0
Jan 30 21:58:07.667: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:08.946: INFO: Number of nodes with available pods: 0
Jan 30 21:58:08.946: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:09.430: INFO: Number of nodes with available pods: 0
Jan 30 21:58:09.430: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:10.422: INFO: Number of nodes with available pods: 1
Jan 30 21:58:10.422: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:11.427: INFO: Number of nodes with available pods: 1
Jan 30 21:58:11.428: INFO: Node jerma-node is running more than one daemon pod
Jan 30 21:58:12.425: INFO: Number of nodes with available pods: 2
Jan 30 21:58:12.425: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 30 21:58:12.516: INFO: Number of nodes with available pods: 1
Jan 30 21:58:12.517: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:13.534: INFO: Number of nodes with available pods: 1
Jan 30 21:58:13.534: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:14.533: INFO: Number of nodes with available pods: 1
Jan 30 21:58:14.533: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:15.528: INFO: Number of nodes with available pods: 1
Jan 30 21:58:15.528: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:16.539: INFO: Number of nodes with available pods: 1
Jan 30 21:58:16.539: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:17.534: INFO: Number of nodes with available pods: 1
Jan 30 21:58:17.534: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:18.533: INFO: Number of nodes with available pods: 1
Jan 30 21:58:18.533: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:19.533: INFO: Number of nodes with available pods: 1
Jan 30 21:58:19.533: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:20.550: INFO: Number of nodes with available pods: 1
Jan 30 21:58:20.551: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:21.666: INFO: Number of nodes with available pods: 1
Jan 30 21:58:21.667: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:22.601: INFO: Number of nodes with available pods: 1
Jan 30 21:58:22.601: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 21:58:23.620: INFO: Number of nodes with available pods: 2
Jan 30 21:58:23.620: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3567, will wait for the garbage collector to delete the pods
Jan 30 21:58:23.690: INFO: Deleting DaemonSet.extensions daemon-set took: 10.988883ms
Jan 30 21:58:23.991: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.897627ms
Jan 30 21:58:32.399: INFO: Number of nodes with available pods: 0
Jan 30 21:58:32.399: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 21:58:32.401: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3567/daemonsets","resourceVersion":"5377936"},"items":null}

Jan 30 21:58:32.403: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3567/pods","resourceVersion":"5377936"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:58:32.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3567" for this suite.

• [SLOW TEST:31.186 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":128,"skipped":1878,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:58:32.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 30 21:58:32.571: INFO: Waiting up to 5m0s for pod "downward-api-ae1932ff-49ce-4ed4-8b6a-6005a7747e32" in namespace "downward-api-6092" to be "success or failure"
Jan 30 21:58:32.585: INFO: Pod "downward-api-ae1932ff-49ce-4ed4-8b6a-6005a7747e32": Phase="Pending", Reason="", readiness=false. Elapsed: 13.826282ms
Jan 30 21:58:34.592: INFO: Pod "downward-api-ae1932ff-49ce-4ed4-8b6a-6005a7747e32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020730278s
Jan 30 21:58:36.605: INFO: Pod "downward-api-ae1932ff-49ce-4ed4-8b6a-6005a7747e32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03350571s
Jan 30 21:58:38.618: INFO: Pod "downward-api-ae1932ff-49ce-4ed4-8b6a-6005a7747e32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046788914s
Jan 30 21:58:40.626: INFO: Pod "downward-api-ae1932ff-49ce-4ed4-8b6a-6005a7747e32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055017015s
STEP: Saw pod success
Jan 30 21:58:40.626: INFO: Pod "downward-api-ae1932ff-49ce-4ed4-8b6a-6005a7747e32" satisfied condition "success or failure"
Jan 30 21:58:40.632: INFO: Trying to get logs from node jerma-node pod downward-api-ae1932ff-49ce-4ed4-8b6a-6005a7747e32 container dapi-container: 
STEP: delete the pod
Jan 30 21:58:40.733: INFO: Waiting for pod downward-api-ae1932ff-49ce-4ed4-8b6a-6005a7747e32 to disappear
Jan 30 21:58:40.739: INFO: Pod downward-api-ae1932ff-49ce-4ed4-8b6a-6005a7747e32 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:58:40.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6092" for this suite.

• [SLOW TEST:8.319 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":1886,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:58:40.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 30 21:58:40.935: INFO: >>> kubeConfig: /root/.kube/config
Jan 30 21:58:43.949: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:58:56.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7035" for this suite.

• [SLOW TEST:16.204 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":130,"skipped":1888,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:58:56.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 30 21:58:57.043: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 30 21:58:57.063: INFO: Waiting for terminating namespaces to be deleted...
Jan 30 21:58:57.066: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 30 21:58:57.073: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 30 21:58:57.073: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 21:58:57.073: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 30 21:58:57.073: INFO: 	Container weave ready: true, restart count 1
Jan 30 21:58:57.073: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 21:58:57.073: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 30 21:58:57.098: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 30 21:58:57.098: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 30 21:58:57.098: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 30 21:58:57.098: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 21:58:57.098: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 30 21:58:57.098: INFO: 	Container weave ready: true, restart count 0
Jan 30 21:58:57.098: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 21:58:57.098: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 30 21:58:57.098: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 30 21:58:57.098: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 30 21:58:57.098: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 30 21:58:57.098: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 30 21:58:57.098: INFO: 	Container etcd ready: true, restart count 1
Jan 30 21:58:57.098: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 30 21:58:57.098: INFO: 	Container coredns ready: true, restart count 0
Jan 30 21:58:57.098: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 30 21:58:57.098: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-511f18d1-19f4-4e2a-9ba1-d1e72910d495 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-511f18d1-19f4-4e2a-9ba1-d1e72910d495 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-511f18d1-19f4-4e2a-9ba1-d1e72910d495
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:59:13.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9289" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.366 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":131,"skipped":1892,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:59:13.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 30 21:59:29.895: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 21:59:29.906: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 21:59:31.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 21:59:31.915: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 21:59:33.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 21:59:33.916: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 21:59:35.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 21:59:35.913: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 21:59:37.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 21:59:37.913: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 21:59:39.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 21:59:39.911: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 21:59:41.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 21:59:41.912: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 30 21:59:43.907: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 30 21:59:43.919: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:59:43.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8348" for this suite.

• [SLOW TEST:30.613 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":1938,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:59:43.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-da262cba-d5f4-4cfc-a89b-12e637a3df3b
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:59:44.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6042" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":133,"skipped":2000,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:59:44.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 30 21:59:44.203: INFO: Waiting up to 5m0s for pod "pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42" in namespace "emptydir-3374" to be "success or failure"
Jan 30 21:59:44.219: INFO: Pod "pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42": Phase="Pending", Reason="", readiness=false. Elapsed: 16.282604ms
Jan 30 21:59:46.228: INFO: Pod "pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025330317s
Jan 30 21:59:48.236: INFO: Pod "pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033027803s
Jan 30 21:59:50.242: INFO: Pod "pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039293516s
Jan 30 21:59:52.247: INFO: Pod "pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044091549s
Jan 30 21:59:54.253: INFO: Pod "pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050827828s
STEP: Saw pod success
Jan 30 21:59:54.254: INFO: Pod "pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42" satisfied condition "success or failure"
Jan 30 21:59:54.257: INFO: Trying to get logs from node jerma-node pod pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42 container test-container: 
STEP: delete the pod
Jan 30 21:59:54.290: INFO: Waiting for pod pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42 to disappear
Jan 30 21:59:54.300: INFO: Pod pod-9cdd23a3-dee8-47e4-8fe9-c1b404d25c42 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 21:59:54.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3374" for this suite.

• [SLOW TEST:10.245 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2007,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 21:59:54.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-386bfc14-da20-4d13-8d9a-6ceb0b6ea388
STEP: Creating a pod to test consume configMaps
Jan 30 21:59:54.448: INFO: Waiting up to 5m0s for pod "pod-configmaps-15067021-1fcf-4b0c-a0c3-17e6597ac1bd" in namespace "configmap-7608" to be "success or failure"
Jan 30 21:59:54.497: INFO: Pod "pod-configmaps-15067021-1fcf-4b0c-a0c3-17e6597ac1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 48.637402ms
Jan 30 21:59:56.506: INFO: Pod "pod-configmaps-15067021-1fcf-4b0c-a0c3-17e6597ac1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057937517s
Jan 30 21:59:58.517: INFO: Pod "pod-configmaps-15067021-1fcf-4b0c-a0c3-17e6597ac1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068707129s
Jan 30 22:00:00.529: INFO: Pod "pod-configmaps-15067021-1fcf-4b0c-a0c3-17e6597ac1bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080489885s
Jan 30 22:00:02.542: INFO: Pod "pod-configmaps-15067021-1fcf-4b0c-a0c3-17e6597ac1bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093848311s
STEP: Saw pod success
Jan 30 22:00:02.543: INFO: Pod "pod-configmaps-15067021-1fcf-4b0c-a0c3-17e6597ac1bd" satisfied condition "success or failure"
Jan 30 22:00:02.547: INFO: Trying to get logs from node jerma-node pod pod-configmaps-15067021-1fcf-4b0c-a0c3-17e6597ac1bd container configmap-volume-test: 
STEP: delete the pod
Jan 30 22:00:02.591: INFO: Waiting for pod pod-configmaps-15067021-1fcf-4b0c-a0c3-17e6597ac1bd to disappear
Jan 30 22:00:02.612: INFO: Pod pod-configmaps-15067021-1fcf-4b0c-a0c3-17e6597ac1bd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:00:02.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7608" for this suite.

• [SLOW TEST:8.345 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2007,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:00:02.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 22:00:02.821: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c89eb337-9450-4f37-ae16-190fc873e168" in namespace "projected-989" to be "success or failure"
Jan 30 22:00:02.850: INFO: Pod "downwardapi-volume-c89eb337-9450-4f37-ae16-190fc873e168": Phase="Pending", Reason="", readiness=false. Elapsed: 28.456492ms
Jan 30 22:00:04.860: INFO: Pod "downwardapi-volume-c89eb337-9450-4f37-ae16-190fc873e168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0381226s
Jan 30 22:00:06.866: INFO: Pod "downwardapi-volume-c89eb337-9450-4f37-ae16-190fc873e168": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044447387s
Jan 30 22:00:08.873: INFO: Pod "downwardapi-volume-c89eb337-9450-4f37-ae16-190fc873e168": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051333088s
Jan 30 22:00:10.882: INFO: Pod "downwardapi-volume-c89eb337-9450-4f37-ae16-190fc873e168": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060569955s
STEP: Saw pod success
Jan 30 22:00:10.882: INFO: Pod "downwardapi-volume-c89eb337-9450-4f37-ae16-190fc873e168" satisfied condition "success or failure"
Jan 30 22:00:10.888: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c89eb337-9450-4f37-ae16-190fc873e168 container client-container: 
STEP: delete the pod
Jan 30 22:00:10.959: INFO: Waiting for pod downwardapi-volume-c89eb337-9450-4f37-ae16-190fc873e168 to disappear
Jan 30 22:00:10.966: INFO: Pod downwardapi-volume-c89eb337-9450-4f37-ae16-190fc873e168 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:00:10.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-989" for this suite.

• [SLOW TEST:8.314 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2013,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:00:10.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:00:19.144: INFO: Waiting up to 5m0s for pod "client-envvars-58b7243c-6b42-4278-b481-4f77c05af65a" in namespace "pods-2179" to be "success or failure"
Jan 30 22:00:19.201: INFO: Pod "client-envvars-58b7243c-6b42-4278-b481-4f77c05af65a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.403102ms
Jan 30 22:00:21.213: INFO: Pod "client-envvars-58b7243c-6b42-4278-b481-4f77c05af65a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068656287s
Jan 30 22:00:23.220: INFO: Pod "client-envvars-58b7243c-6b42-4278-b481-4f77c05af65a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075564994s
Jan 30 22:00:25.226: INFO: Pod "client-envvars-58b7243c-6b42-4278-b481-4f77c05af65a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081493124s
Jan 30 22:00:27.234: INFO: Pod "client-envvars-58b7243c-6b42-4278-b481-4f77c05af65a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089696968s
STEP: Saw pod success
Jan 30 22:00:27.234: INFO: Pod "client-envvars-58b7243c-6b42-4278-b481-4f77c05af65a" satisfied condition "success or failure"
Jan 30 22:00:27.241: INFO: Trying to get logs from node jerma-node pod client-envvars-58b7243c-6b42-4278-b481-4f77c05af65a container env3cont: 
STEP: delete the pod
Jan 30 22:00:27.305: INFO: Waiting for pod client-envvars-58b7243c-6b42-4278-b481-4f77c05af65a to disappear
Jan 30 22:00:27.314: INFO: Pod client-envvars-58b7243c-6b42-4278-b481-4f77c05af65a no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:00:27.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2179" for this suite.

• [SLOW TEST:16.348 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2026,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:00:27.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jan 30 22:00:27.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:00:44.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2430" for this suite.

• [SLOW TEST:17.501 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":138,"skipped":2026,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:00:44.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 30 22:00:44.947: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8165 /api/v1/namespaces/watch-8165/configmaps/e2e-watch-test-watch-closed fdec19f6-2bae-4bdc-b448-5d1d1ad1a396 5378551 0 2020-01-30 22:00:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 30 22:00:44.948: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8165 /api/v1/namespaces/watch-8165/configmaps/e2e-watch-test-watch-closed fdec19f6-2bae-4bdc-b448-5d1d1ad1a396 5378552 0 2020-01-30 22:00:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 30 22:00:44.985: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8165 /api/v1/namespaces/watch-8165/configmaps/e2e-watch-test-watch-closed fdec19f6-2bae-4bdc-b448-5d1d1ad1a396 5378553 0 2020-01-30 22:00:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 30 22:00:44.985: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8165 /api/v1/namespaces/watch-8165/configmaps/e2e-watch-test-watch-closed fdec19f6-2bae-4bdc-b448-5d1d1ad1a396 5378554 0 2020-01-30 22:00:44 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:00:44.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8165" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":139,"skipped":2044,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:00:44.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 30 22:00:45.816: INFO: Pod name wrapped-volume-race-117afee2-4349-4e70-bd1c-8fa8d01712da: Found 0 pods out of 5
Jan 30 22:00:50.943: INFO: Pod name wrapped-volume-race-117afee2-4349-4e70-bd1c-8fa8d01712da: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-117afee2-4349-4e70-bd1c-8fa8d01712da in namespace emptydir-wrapper-7996, will wait for the garbage collector to delete the pods
Jan 30 22:01:17.097: INFO: Deleting ReplicationController wrapped-volume-race-117afee2-4349-4e70-bd1c-8fa8d01712da took: 15.292681ms
Jan 30 22:01:17.498: INFO: Terminating ReplicationController wrapped-volume-race-117afee2-4349-4e70-bd1c-8fa8d01712da pods took: 400.732672ms
STEP: Creating RC which spawns configmap-volume pods
Jan 30 22:01:33.374: INFO: Pod name wrapped-volume-race-13118509-34bd-4dc4-a7e1-fd57cc6694b3: Found 0 pods out of 5
Jan 30 22:01:38.492: INFO: Pod name wrapped-volume-race-13118509-34bd-4dc4-a7e1-fd57cc6694b3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-13118509-34bd-4dc4-a7e1-fd57cc6694b3 in namespace emptydir-wrapper-7996, will wait for the garbage collector to delete the pods
Jan 30 22:02:04.735: INFO: Deleting ReplicationController wrapped-volume-race-13118509-34bd-4dc4-a7e1-fd57cc6694b3 took: 8.247361ms
Jan 30 22:02:05.236: INFO: Terminating ReplicationController wrapped-volume-race-13118509-34bd-4dc4-a7e1-fd57cc6694b3 pods took: 500.783715ms
STEP: Creating RC which spawns configmap-volume pods
Jan 30 22:02:23.327: INFO: Pod name wrapped-volume-race-a5a61c9f-1b5b-46ef-9033-5352c35b1825: Found 0 pods out of 5
Jan 30 22:02:28.354: INFO: Pod name wrapped-volume-race-a5a61c9f-1b5b-46ef-9033-5352c35b1825: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a5a61c9f-1b5b-46ef-9033-5352c35b1825 in namespace emptydir-wrapper-7996, will wait for the garbage collector to delete the pods
Jan 30 22:02:56.464: INFO: Deleting ReplicationController wrapped-volume-race-a5a61c9f-1b5b-46ef-9033-5352c35b1825 took: 16.95166ms
Jan 30 22:02:56.965: INFO: Terminating ReplicationController wrapped-volume-race-a5a61c9f-1b5b-46ef-9033-5352c35b1825 pods took: 501.114122ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:03:07.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7996" for this suite.

• [SLOW TEST:142.620 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":140,"skipped":2052,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:03:07.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 30 22:03:07.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-9807'
Jan 30 22:03:09.762: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 30 22:03:09.762: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718
Jan 30 22:03:11.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9807'
Jan 30 22:03:12.131: INFO: stderr: ""
Jan 30 22:03:12.132: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:03:12.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9807" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":141,"skipped":2057,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:03:12.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-8806227d-c49a-4a52-a817-4f76c05a8e73
Jan 30 22:03:12.368: INFO: Pod name my-hostname-basic-8806227d-c49a-4a52-a817-4f76c05a8e73: Found 0 pods out of 1
Jan 30 22:03:17.380: INFO: Pod name my-hostname-basic-8806227d-c49a-4a52-a817-4f76c05a8e73: Found 1 pod out of 1
Jan 30 22:03:17.380: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-8806227d-c49a-4a52-a817-4f76c05a8e73" are running
Jan 30 22:03:25.428: INFO: Pod "my-hostname-basic-8806227d-c49a-4a52-a817-4f76c05a8e73-m4k85" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 22:03:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 22:03:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8806227d-c49a-4a52-a817-4f76c05a8e73]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 22:03:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-8806227d-c49a-4a52-a817-4f76c05a8e73]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 22:03:12 +0000 UTC Reason: Message:}])
Jan 30 22:03:25.428: INFO: Trying to dial the pod
Jan 30 22:03:30.457: INFO: Controller my-hostname-basic-8806227d-c49a-4a52-a817-4f76c05a8e73: Got expected result from replica 1 [my-hostname-basic-8806227d-c49a-4a52-a817-4f76c05a8e73-m4k85]: "my-hostname-basic-8806227d-c49a-4a52-a817-4f76c05a8e73-m4k85", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:03:30.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2608" for this suite.

• [SLOW TEST:18.296 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":142,"skipped":2085,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:03:30.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Jan 30 22:03:30.606: INFO: Waiting up to 5m0s for pod "client-containers-651c3ac6-31c7-41b7-9c42-ebc3b0f19290" in namespace "containers-4245" to be "success or failure"
Jan 30 22:03:30.620: INFO: Pod "client-containers-651c3ac6-31c7-41b7-9c42-ebc3b0f19290": Phase="Pending", Reason="", readiness=false. Elapsed: 13.268776ms
Jan 30 22:03:32.633: INFO: Pod "client-containers-651c3ac6-31c7-41b7-9c42-ebc3b0f19290": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026241502s
Jan 30 22:03:34.641: INFO: Pod "client-containers-651c3ac6-31c7-41b7-9c42-ebc3b0f19290": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034503451s
Jan 30 22:03:36.647: INFO: Pod "client-containers-651c3ac6-31c7-41b7-9c42-ebc3b0f19290": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040492139s
Jan 30 22:03:38.655: INFO: Pod "client-containers-651c3ac6-31c7-41b7-9c42-ebc3b0f19290": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048812556s
STEP: Saw pod success
Jan 30 22:03:38.655: INFO: Pod "client-containers-651c3ac6-31c7-41b7-9c42-ebc3b0f19290" satisfied condition "success or failure"
Jan 30 22:03:38.662: INFO: Trying to get logs from node jerma-node pod client-containers-651c3ac6-31c7-41b7-9c42-ebc3b0f19290 container test-container: 
STEP: delete the pod
Jan 30 22:03:38.755: INFO: Waiting for pod client-containers-651c3ac6-31c7-41b7-9c42-ebc3b0f19290 to disappear
Jan 30 22:03:38.762: INFO: Pod client-containers-651c3ac6-31c7-41b7-9c42-ebc3b0f19290 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:03:38.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4245" for this suite.

• [SLOW TEST:8.297 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2096,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:03:38.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 30 22:03:51.049: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3913 PodName:pod-sharedvolume-d6f55612-0338-4c04-a2c4-6bd343423aee ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 22:03:51.050: INFO: >>> kubeConfig: /root/.kube/config
I0130 22:03:51.123286       8 log.go:172] (0xc001b8bd90) (0xc002060fa0) Create stream
I0130 22:03:51.123458       8 log.go:172] (0xc001b8bd90) (0xc002060fa0) Stream added, broadcasting: 1
I0130 22:03:51.126450       8 log.go:172] (0xc001b8bd90) Reply frame received for 1
I0130 22:03:51.126497       8 log.go:172] (0xc001b8bd90) (0xc0021b6500) Create stream
I0130 22:03:51.126508       8 log.go:172] (0xc001b8bd90) (0xc0021b6500) Stream added, broadcasting: 3
I0130 22:03:51.127932       8 log.go:172] (0xc001b8bd90) Reply frame received for 3
I0130 22:03:51.127972       8 log.go:172] (0xc001b8bd90) (0xc0020e0140) Create stream
I0130 22:03:51.127987       8 log.go:172] (0xc001b8bd90) (0xc0020e0140) Stream added, broadcasting: 5
I0130 22:03:51.129571       8 log.go:172] (0xc001b8bd90) Reply frame received for 5
I0130 22:03:51.229044       8 log.go:172] (0xc001b8bd90) Data frame received for 3
I0130 22:03:51.229319       8 log.go:172] (0xc0021b6500) (3) Data frame handling
I0130 22:03:51.229621       8 log.go:172] (0xc0021b6500) (3) Data frame sent
I0130 22:03:51.314803       8 log.go:172] (0xc001b8bd90) (0xc0021b6500) Stream removed, broadcasting: 3
I0130 22:03:51.314960       8 log.go:172] (0xc001b8bd90) Data frame received for 1
I0130 22:03:51.314976       8 log.go:172] (0xc002060fa0) (1) Data frame handling
I0130 22:03:51.314997       8 log.go:172] (0xc002060fa0) (1) Data frame sent
I0130 22:03:51.315110       8 log.go:172] (0xc001b8bd90) (0xc002060fa0) Stream removed, broadcasting: 1
I0130 22:03:51.315278       8 log.go:172] (0xc001b8bd90) (0xc0020e0140) Stream removed, broadcasting: 5
I0130 22:03:51.315298       8 log.go:172] (0xc001b8bd90) Go away received
I0130 22:03:51.315891       8 log.go:172] (0xc001b8bd90) (0xc002060fa0) Stream removed, broadcasting: 1
I0130 22:03:51.315910       8 log.go:172] (0xc001b8bd90) (0xc0021b6500) Stream removed, broadcasting: 3
I0130 22:03:51.315922       8 log.go:172] (0xc001b8bd90) (0xc0020e0140) Stream removed, broadcasting: 5
Jan 30 22:03:51.315: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:03:51.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3913" for this suite.

• [SLOW TEST:12.555 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":144,"skipped":2113,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:03:51.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 22:03:51.433: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6508b520-bdd7-4c9c-9457-a0790fe45c70" in namespace "downward-api-373" to be "success or failure"
Jan 30 22:03:51.444: INFO: Pod "downwardapi-volume-6508b520-bdd7-4c9c-9457-a0790fe45c70": Phase="Pending", Reason="", readiness=false. Elapsed: 10.867466ms
Jan 30 22:03:53.450: INFO: Pod "downwardapi-volume-6508b520-bdd7-4c9c-9457-a0790fe45c70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016190101s
Jan 30 22:03:55.499: INFO: Pod "downwardapi-volume-6508b520-bdd7-4c9c-9457-a0790fe45c70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06565108s
Jan 30 22:03:57.506: INFO: Pod "downwardapi-volume-6508b520-bdd7-4c9c-9457-a0790fe45c70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072727241s
Jan 30 22:03:59.512: INFO: Pod "downwardapi-volume-6508b520-bdd7-4c9c-9457-a0790fe45c70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078925102s
STEP: Saw pod success
Jan 30 22:03:59.513: INFO: Pod "downwardapi-volume-6508b520-bdd7-4c9c-9457-a0790fe45c70" satisfied condition "success or failure"
Jan 30 22:03:59.515: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6508b520-bdd7-4c9c-9457-a0790fe45c70 container client-container: 
STEP: delete the pod
Jan 30 22:03:59.706: INFO: Waiting for pod downwardapi-volume-6508b520-bdd7-4c9c-9457-a0790fe45c70 to disappear
Jan 30 22:03:59.713: INFO: Pod downwardapi-volume-6508b520-bdd7-4c9c-9457-a0790fe45c70 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:03:59.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-373" for this suite.

• [SLOW TEST:8.415 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:03:59.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:03:59.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:04:07.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7668" for this suite.

• [SLOW TEST:8.241 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:04:07.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 30 22:04:08.101: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 30 22:04:08.117: INFO: Waiting for terminating namespaces to be deleted...
Jan 30 22:04:08.120: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 30 22:04:08.126: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 30 22:04:08.126: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 22:04:08.126: INFO: pod-logs-websocket-dbb9cc40-61d6-4018-9369-066875090d25 from pods-7668 started at 2020-01-30 22:04:00 +0000 UTC (1 container statuses recorded)
Jan 30 22:04:08.126: INFO: 	Container main ready: true, restart count 0
Jan 30 22:04:08.126: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 30 22:04:08.126: INFO: 	Container weave ready: true, restart count 1
Jan 30 22:04:08.126: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 22:04:08.126: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 30 22:04:08.142: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 30 22:04:08.142: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 30 22:04:08.142: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 30 22:04:08.142: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 30 22:04:08.142: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 30 22:04:08.142: INFO: 	Container etcd ready: true, restart count 1
Jan 30 22:04:08.142: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 30 22:04:08.142: INFO: 	Container coredns ready: true, restart count 0
Jan 30 22:04:08.142: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 30 22:04:08.142: INFO: 	Container coredns ready: true, restart count 0
Jan 30 22:04:08.142: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 30 22:04:08.142: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 30 22:04:08.142: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 30 22:04:08.142: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 22:04:08.142: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 30 22:04:08.142: INFO: 	Container weave ready: true, restart count 0
Jan 30 22:04:08.142: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Jan 30 22:04:08.300: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 30 22:04:08.300: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 30 22:04:08.300: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 30 22:04:08.300: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Jan 30 22:04:08.300: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Jan 30 22:04:08.300: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 30 22:04:08.300: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Jan 30 22:04:08.300: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 30 22:04:08.300: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Jan 30 22:04:08.300: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
Jan 30 22:04:08.300: INFO: Pod pod-logs-websocket-dbb9cc40-61d6-4018-9369-066875090d25 requesting resource cpu=0m on Node jerma-node
STEP: Starting Pods to consume most of the cluster CPU.
Jan 30 22:04:08.300: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
Jan 30 22:04:08.394: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-56d5830f-f745-4b22-862d-0275f50eefd1.15eec94c38a7f8eb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8286/filler-pod-56d5830f-f745-4b22-862d-0275f50eefd1 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-56d5830f-f745-4b22-862d-0275f50eefd1.15eec94d09dcdecb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-56d5830f-f745-4b22-862d-0275f50eefd1.15eec94db7d45495], Reason = [Created], Message = [Created container filler-pod-56d5830f-f745-4b22-862d-0275f50eefd1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-56d5830f-f745-4b22-862d-0275f50eefd1.15eec94dd441e4cb], Reason = [Started], Message = [Started container filler-pod-56d5830f-f745-4b22-862d-0275f50eefd1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8e310b00-faa7-4789-944d-2007b572809c.15eec94c34e0f00b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8286/filler-pod-8e310b00-faa7-4789-944d-2007b572809c to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8e310b00-faa7-4789-944d-2007b572809c.15eec94d249b7588], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8e310b00-faa7-4789-944d-2007b572809c.15eec94dda3af7e4], Reason = [Created], Message = [Created container filler-pod-8e310b00-faa7-4789-944d-2007b572809c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8e310b00-faa7-4789-944d-2007b572809c.15eec94e03370566], Reason = [Started], Message = [Started container filler-pod-8e310b00-faa7-4789-944d-2007b572809c]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15eec94e1799cb5e], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15eec94e2156e8f7], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:04:17.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8286" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:9.635 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":147,"skipped":2242,"failed":0}
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:04:17.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:04:17.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3798" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":148,"skipped":2243,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:04:17.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:04:30.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6978" for this suite.

• [SLOW TEST:12.256 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":149,"skipped":2254,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:04:30.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:04:30.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-245" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":150,"skipped":2260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:04:30.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-965e0bb0-2556-41c8-9cdf-6748c5368744 in namespace container-probe-1939
Jan 30 22:04:39.106: INFO: Started pod test-webserver-965e0bb0-2556-41c8-9cdf-6748c5368744 in namespace container-probe-1939
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 22:04:39.112: INFO: Initial restart count of pod test-webserver-965e0bb0-2556-41c8-9cdf-6748c5368744 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:08:40.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1939" for this suite.

• [SLOW TEST:249.939 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2319,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:08:40.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-5238/configmap-test-395a3b80-a8d9-4f37-b691-62d9d5d4952b
STEP: Creating a pod to test consume configMaps
Jan 30 22:08:40.769: INFO: Waiting up to 5m0s for pod "pod-configmaps-444e5c2b-0e19-459e-afe9-622901d61c05" in namespace "configmap-5238" to be "success or failure"
Jan 30 22:08:40.785: INFO: Pod "pod-configmaps-444e5c2b-0e19-459e-afe9-622901d61c05": Phase="Pending", Reason="", readiness=false. Elapsed: 15.596063ms
Jan 30 22:08:42.810: INFO: Pod "pod-configmaps-444e5c2b-0e19-459e-afe9-622901d61c05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040358944s
Jan 30 22:08:44.820: INFO: Pod "pod-configmaps-444e5c2b-0e19-459e-afe9-622901d61c05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050890483s
Jan 30 22:08:46.829: INFO: Pod "pod-configmaps-444e5c2b-0e19-459e-afe9-622901d61c05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059275551s
Jan 30 22:08:48.835: INFO: Pod "pod-configmaps-444e5c2b-0e19-459e-afe9-622901d61c05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066103191s
STEP: Saw pod success
Jan 30 22:08:48.835: INFO: Pod "pod-configmaps-444e5c2b-0e19-459e-afe9-622901d61c05" satisfied condition "success or failure"
Jan 30 22:08:48.840: INFO: Trying to get logs from node jerma-node pod pod-configmaps-444e5c2b-0e19-459e-afe9-622901d61c05 container env-test: 
STEP: delete the pod
Jan 30 22:08:49.026: INFO: Waiting for pod pod-configmaps-444e5c2b-0e19-459e-afe9-622901d61c05 to disappear
Jan 30 22:08:49.036: INFO: Pod pod-configmaps-444e5c2b-0e19-459e-afe9-622901d61c05 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:08:49.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5238" for this suite.

• [SLOW TEST:8.415 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2322,"failed":0}
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:08:49.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-a4ca9fa9-75a6-4e39-ad64-e28d6905eef5
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-a4ca9fa9-75a6-4e39-ad64-e28d6905eef5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:08:59.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1819" for this suite.

• [SLOW TEST:10.359 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2322,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:08:59.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:09:07.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8362" for this suite.

• [SLOW TEST:8.140 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2345,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:09:07.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-d777d2cd-af01-4b4c-bc1e-beeb5086eb5b
STEP: Creating a pod to test consume secrets
Jan 30 22:09:07.682: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6d5941dd-17dc-45f6-a319-050dcc8c23e9" in namespace "projected-6456" to be "success or failure"
Jan 30 22:09:07.710: INFO: Pod "pod-projected-secrets-6d5941dd-17dc-45f6-a319-050dcc8c23e9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.118878ms
Jan 30 22:09:09.719: INFO: Pod "pod-projected-secrets-6d5941dd-17dc-45f6-a319-050dcc8c23e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036866824s
Jan 30 22:09:11.726: INFO: Pod "pod-projected-secrets-6d5941dd-17dc-45f6-a319-050dcc8c23e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043846204s
Jan 30 22:09:13.735: INFO: Pod "pod-projected-secrets-6d5941dd-17dc-45f6-a319-050dcc8c23e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052826326s
Jan 30 22:09:15.742: INFO: Pod "pod-projected-secrets-6d5941dd-17dc-45f6-a319-050dcc8c23e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059654122s
STEP: Saw pod success
Jan 30 22:09:15.742: INFO: Pod "pod-projected-secrets-6d5941dd-17dc-45f6-a319-050dcc8c23e9" satisfied condition "success or failure"
Jan 30 22:09:15.747: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-6d5941dd-17dc-45f6-a319-050dcc8c23e9 container projected-secret-volume-test: 
STEP: delete the pod
Jan 30 22:09:15.864: INFO: Waiting for pod pod-projected-secrets-6d5941dd-17dc-45f6-a319-050dcc8c23e9 to disappear
Jan 30 22:09:15.887: INFO: Pod pod-projected-secrets-6d5941dd-17dc-45f6-a319-050dcc8c23e9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:09:15.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6456" for this suite.

• [SLOW TEST:8.340 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2346,"failed":0}
SSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:09:15.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Jan 30 22:09:16.012: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7254" to be "success or failure"
Jan 30 22:09:16.026: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.760485ms
Jan 30 22:09:18.035: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022479164s
Jan 30 22:09:20.043: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030326689s
Jan 30 22:09:22.049: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036297325s
Jan 30 22:09:24.058: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04522247s
Jan 30 22:09:26.072: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05967266s
STEP: Saw pod success
Jan 30 22:09:26.073: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 30 22:09:26.079: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 30 22:09:26.147: INFO: Waiting for pod pod-host-path-test to disappear
Jan 30 22:09:26.183: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:09:26.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7254" for this suite.

• [SLOW TEST:10.297 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:09:26.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8692
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 22:09:26.410: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 22:10:00.705: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8692 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 22:10:00.705: INFO: >>> kubeConfig: /root/.kube/config
I0130 22:10:00.762804       8 log.go:172] (0xc001d4a790) (0xc002061720) Create stream
I0130 22:10:00.762912       8 log.go:172] (0xc001d4a790) (0xc002061720) Stream added, broadcasting: 1
I0130 22:10:00.767090       8 log.go:172] (0xc001d4a790) Reply frame received for 1
I0130 22:10:00.767142       8 log.go:172] (0xc001d4a790) (0xc001b89d60) Create stream
I0130 22:10:00.767160       8 log.go:172] (0xc001d4a790) (0xc001b89d60) Stream added, broadcasting: 3
I0130 22:10:00.769207       8 log.go:172] (0xc001d4a790) Reply frame received for 3
I0130 22:10:00.769272       8 log.go:172] (0xc001d4a790) (0xc00214abe0) Create stream
I0130 22:10:00.769308       8 log.go:172] (0xc001d4a790) (0xc00214abe0) Stream added, broadcasting: 5
I0130 22:10:00.771297       8 log.go:172] (0xc001d4a790) Reply frame received for 5
I0130 22:10:01.884014       8 log.go:172] (0xc001d4a790) Data frame received for 3
I0130 22:10:01.884087       8 log.go:172] (0xc001b89d60) (3) Data frame handling
I0130 22:10:01.884111       8 log.go:172] (0xc001b89d60) (3) Data frame sent
I0130 22:10:01.984630       8 log.go:172] (0xc001d4a790) Data frame received for 1
I0130 22:10:01.984727       8 log.go:172] (0xc002061720) (1) Data frame handling
I0130 22:10:01.984751       8 log.go:172] (0xc002061720) (1) Data frame sent
I0130 22:10:01.984809       8 log.go:172] (0xc001d4a790) (0xc002061720) Stream removed, broadcasting: 1
I0130 22:10:01.985141       8 log.go:172] (0xc001d4a790) (0xc001b89d60) Stream removed, broadcasting: 3
I0130 22:10:01.985438       8 log.go:172] (0xc001d4a790) (0xc00214abe0) Stream removed, broadcasting: 5
I0130 22:10:01.985502       8 log.go:172] (0xc001d4a790) (0xc002061720) Stream removed, broadcasting: 1
I0130 22:10:01.985523       8 log.go:172] (0xc001d4a790) (0xc001b89d60) Stream removed, broadcasting: 3
I0130 22:10:01.985543       8 log.go:172] (0xc001d4a790) (0xc00214abe0) Stream removed, broadcasting: 5
I0130 22:10:01.985905       8 log.go:172] (0xc001d4a790) Go away received
Jan 30 22:10:01.986: INFO: Found all expected endpoints: [netserver-0]
Jan 30 22:10:01.992: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8692 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 22:10:01.992: INFO: >>> kubeConfig: /root/.kube/config
I0130 22:10:02.048539       8 log.go:172] (0xc001c4a8f0) (0xc00214b180) Create stream
I0130 22:10:02.049136       8 log.go:172] (0xc001c4a8f0) (0xc00214b180) Stream added, broadcasting: 1
I0130 22:10:02.057300       8 log.go:172] (0xc001c4a8f0) Reply frame received for 1
I0130 22:10:02.057406       8 log.go:172] (0xc001c4a8f0) (0xc0023e3860) Create stream
I0130 22:10:02.057432       8 log.go:172] (0xc001c4a8f0) (0xc0023e3860) Stream added, broadcasting: 3
I0130 22:10:02.061311       8 log.go:172] (0xc001c4a8f0) Reply frame received for 3
I0130 22:10:02.061360       8 log.go:172] (0xc001c4a8f0) (0xc00214b2c0) Create stream
I0130 22:10:02.061382       8 log.go:172] (0xc001c4a8f0) (0xc00214b2c0) Stream added, broadcasting: 5
I0130 22:10:02.065367       8 log.go:172] (0xc001c4a8f0) Reply frame received for 5
I0130 22:10:03.161952       8 log.go:172] (0xc001c4a8f0) Data frame received for 3
I0130 22:10:03.162035       8 log.go:172] (0xc0023e3860) (3) Data frame handling
I0130 22:10:03.162056       8 log.go:172] (0xc0023e3860) (3) Data frame sent
I0130 22:10:03.257927       8 log.go:172] (0xc001c4a8f0) (0xc0023e3860) Stream removed, broadcasting: 3
I0130 22:10:03.258222       8 log.go:172] (0xc001c4a8f0) Data frame received for 1
I0130 22:10:03.258248       8 log.go:172] (0xc00214b180) (1) Data frame handling
I0130 22:10:03.258271       8 log.go:172] (0xc00214b180) (1) Data frame sent
I0130 22:10:03.258317       8 log.go:172] (0xc001c4a8f0) (0xc00214b180) Stream removed, broadcasting: 1
I0130 22:10:03.258697       8 log.go:172] (0xc001c4a8f0) (0xc00214b2c0) Stream removed, broadcasting: 5
I0130 22:10:03.258840       8 log.go:172] (0xc001c4a8f0) (0xc00214b180) Stream removed, broadcasting: 1
I0130 22:10:03.258863       8 log.go:172] (0xc001c4a8f0) (0xc0023e3860) Stream removed, broadcasting: 3
I0130 22:10:03.258884       8 log.go:172] (0xc001c4a8f0) (0xc00214b2c0) Stream removed, broadcasting: 5
I0130 22:10:03.259245       8 log.go:172] (0xc001c4a8f0) Go away received
Jan 30 22:10:03.259: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:10:03.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8692" for this suite.

• [SLOW TEST:37.079 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2397,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:10:03.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 30 22:10:22.517: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 22:10:22.541: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 22:10:24.541: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 22:10:24.548: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 22:10:26.542: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 22:10:26.550: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 22:10:28.542: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 22:10:28.553: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 22:10:30.542: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 22:10:30.555: INFO: Pod pod-with-prestop-http-hook still exists
Jan 30 22:10:32.542: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 30 22:10:32.551: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:10:32.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1065" for this suite.

• [SLOW TEST:29.315 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2455,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:10:32.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 30 22:10:32.749: INFO: Waiting up to 5m0s for pod "pod-1a962c8d-1820-4dc9-bdca-667be467fcc6" in namespace "emptydir-2107" to be "success or failure"
Jan 30 22:10:32.767: INFO: Pod "pod-1a962c8d-1820-4dc9-bdca-667be467fcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.833318ms
Jan 30 22:10:34.777: INFO: Pod "pod-1a962c8d-1820-4dc9-bdca-667be467fcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027562819s
Jan 30 22:10:36.787: INFO: Pod "pod-1a962c8d-1820-4dc9-bdca-667be467fcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037395044s
Jan 30 22:10:38.795: INFO: Pod "pod-1a962c8d-1820-4dc9-bdca-667be467fcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045201823s
Jan 30 22:10:40.805: INFO: Pod "pod-1a962c8d-1820-4dc9-bdca-667be467fcc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055288942s
STEP: Saw pod success
Jan 30 22:10:40.805: INFO: Pod "pod-1a962c8d-1820-4dc9-bdca-667be467fcc6" satisfied condition "success or failure"
Jan 30 22:10:40.809: INFO: Trying to get logs from node jerma-node pod pod-1a962c8d-1820-4dc9-bdca-667be467fcc6 container test-container: 
STEP: delete the pod
Jan 30 22:10:40.851: INFO: Waiting for pod pod-1a962c8d-1820-4dc9-bdca-667be467fcc6 to disappear
Jan 30 22:10:40.876: INFO: Pod pod-1a962c8d-1820-4dc9-bdca-667be467fcc6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:10:40.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2107" for this suite.

• [SLOW TEST:8.299 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2466,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:10:40.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:10:41.054: INFO: Creating deployment "test-recreate-deployment"
Jan 30 22:10:41.072: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 30 22:10:41.152: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 30 22:10:43.556: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 30 22:10:44.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019041, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019041, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019041, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019041, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:10:46.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019041, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019041, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019041, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019041, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:10:48.414: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 30 22:10:48.436: INFO: Updating deployment test-recreate-deployment
Jan 30 22:10:48.436: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 30 22:10:49.021: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-6506 /apis/apps/v1/namespaces/deployment-6506/deployments/test-recreate-deployment af110571-4672-4fb6-9f6e-ca7f373e1fde 5381342 2 2020-01-30 22:10:41 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0007a2d18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-30 22:10:48 +0000 UTC,LastTransitionTime:2020-01-30 22:10:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-30 22:10:48 +0000 UTC,LastTransitionTime:2020-01-30 22:10:41 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jan 30 22:10:49.026: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-6506 /apis/apps/v1/namespaces/deployment-6506/replicasets/test-recreate-deployment-5f94c574ff 40de808f-85e6-4d38-9283-8bf6d0310f74 5381341 1 2020-01-30 22:10:48 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment af110571-4672-4fb6-9f6e-ca7f373e1fde 0xc003f32187 0xc003f32188}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f321f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 30 22:10:49.026: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 30 22:10:49.026: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-6506 /apis/apps/v1/namespaces/deployment-6506/replicasets/test-recreate-deployment-799c574856 ed7870ce-3ac6-4f79-b537-fc48bf5505f8 5381331 2 2020-01-30 22:10:41 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment af110571-4672-4fb6-9f6e-ca7f373e1fde 0xc003f32267 0xc003f32268}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f322d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 30 22:10:49.031: INFO: Pod "test-recreate-deployment-5f94c574ff-jbxlt" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-jbxlt test-recreate-deployment-5f94c574ff- deployment-6506 /api/v1/namespaces/deployment-6506/pods/test-recreate-deployment-5f94c574ff-jbxlt c63976ea-f30f-40b4-866f-b5e779537b7b 5381338 0 2020-01-30 22:10:48 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 40de808f-85e6-4d38-9283-8bf6d0310f74 0xc003f32be7 0xc003f32be8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dtvst,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dtvst,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dtvst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 22:10:48 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:10:49.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6506" for this suite.

• [SLOW TEST:8.141 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":160,"skipped":2473,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:10:49.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 22:10:50.208: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 22:10:52.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:10:54.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:10:56.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:10:58.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:11:00.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019050, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 22:11:03.271: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
Jan 30 22:11:03.318: INFO: Waiting for webhook configuration to be ready...
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 30 22:11:03.448: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:11:03.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1988" for this suite.
STEP: Destroying namespace "webhook-1988-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.587 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":161,"skipped":2499,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:11:03.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 30 22:11:03.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7237'
Jan 30 22:11:03.850: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 30 22:11:03.850: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Jan 30 22:11:03.868: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 30 22:11:03.938: INFO: scanned /root for discovery docs: 
Jan 30 22:11:03.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7237'
Jan 30 22:11:27.379: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 30 22:11:27.379: INFO: stdout: "Created e2e-test-httpd-rc-ea872147feb8579a25f97db5040f9420\nScaling up e2e-test-httpd-rc-ea872147feb8579a25f97db5040f9420 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-ea872147feb8579a25f97db5040f9420 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-ea872147feb8579a25f97db5040f9420 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jan 30 22:11:27.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7237'
Jan 30 22:11:27.522: INFO: stderr: ""
Jan 30 22:11:27.523: INFO: stdout: "e2e-test-httpd-rc-ea872147feb8579a25f97db5040f9420-gbhnt e2e-test-httpd-rc-fd6fz "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Jan 30 22:11:32.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7237'
Jan 30 22:11:32.686: INFO: stderr: ""
Jan 30 22:11:32.686: INFO: stdout: "e2e-test-httpd-rc-ea872147feb8579a25f97db5040f9420-gbhnt "
Jan 30 22:11:32.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-ea872147feb8579a25f97db5040f9420-gbhnt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7237'
Jan 30 22:11:32.822: INFO: stderr: ""
Jan 30 22:11:32.823: INFO: stdout: "true"
Jan 30 22:11:32.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-ea872147feb8579a25f97db5040f9420-gbhnt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7237'
Jan 30 22:11:32.920: INFO: stderr: ""
Jan 30 22:11:32.920: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jan 30 22:11:32.920: INFO: e2e-test-httpd-rc-ea872147feb8579a25f97db5040f9420-gbhnt is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678
Jan 30 22:11:32.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7237'
Jan 30 22:11:33.068: INFO: stderr: ""
Jan 30 22:11:33.068: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:11:33.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7237" for this suite.

• [SLOW TEST:29.451 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":162,"skipped":2507,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:11:33.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-f4c74cf5-6afd-48c6-b4f0-737cf186cf67
STEP: Creating a pod to test consume configMaps
Jan 30 22:11:33.162: INFO: Waiting up to 5m0s for pod "pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a" in namespace "configmap-7527" to be "success or failure"
Jan 30 22:11:33.172: INFO: Pod "pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.36369ms
Jan 30 22:11:35.177: INFO: Pod "pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014899602s
Jan 30 22:11:37.183: INFO: Pod "pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020463977s
Jan 30 22:11:39.188: INFO: Pod "pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025746078s
Jan 30 22:11:41.194: INFO: Pod "pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.031989432s
Jan 30 22:11:43.201: INFO: Pod "pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.039054047s
STEP: Saw pod success
Jan 30 22:11:43.202: INFO: Pod "pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a" satisfied condition "success or failure"
Jan 30 22:11:43.208: INFO: Trying to get logs from node jerma-node pod pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a container configmap-volume-test: 
STEP: delete the pod
Jan 30 22:11:43.289: INFO: Waiting for pod pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a to disappear
Jan 30 22:11:43.298: INFO: Pod pod-configmaps-23df6cec-60e6-483f-8f71-3ae0a280636a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:11:43.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7527" for this suite.

• [SLOW TEST:10.230 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2507,"failed":0}
S
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:11:43.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:11:43.471: INFO: Creating ReplicaSet my-hostname-basic-ce95af0d-94bc-4708-8183-626a129be17a
Jan 30 22:11:43.552: INFO: Pod name my-hostname-basic-ce95af0d-94bc-4708-8183-626a129be17a: Found 0 pods out of 1
Jan 30 22:11:48.574: INFO: Pod name my-hostname-basic-ce95af0d-94bc-4708-8183-626a129be17a: Found 1 pods out of 1
Jan 30 22:11:48.574: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ce95af0d-94bc-4708-8183-626a129be17a" is running
Jan 30 22:11:50.589: INFO: Pod "my-hostname-basic-ce95af0d-94bc-4708-8183-626a129be17a-q2nf8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 22:11:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 22:11:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ce95af0d-94bc-4708-8183-626a129be17a]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 22:11:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ce95af0d-94bc-4708-8183-626a129be17a]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-30 22:11:43 +0000 UTC Reason: Message:}])
Jan 30 22:11:50.589: INFO: Trying to dial the pod
Jan 30 22:11:55.621: INFO: Controller my-hostname-basic-ce95af0d-94bc-4708-8183-626a129be17a: Got expected result from replica 1 [my-hostname-basic-ce95af0d-94bc-4708-8183-626a129be17a-q2nf8]: "my-hostname-basic-ce95af0d-94bc-4708-8183-626a129be17a-q2nf8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:11:55.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7812" for this suite.

• [SLOW TEST:12.328 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":164,"skipped":2508,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:11:55.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-f251d3b8-3ac9-4ca6-93a6-23e4b08b4da1 in namespace container-probe-2424
Jan 30 22:12:03.826: INFO: Started pod busybox-f251d3b8-3ac9-4ca6-93a6-23e4b08b4da1 in namespace container-probe-2424
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 22:12:03.831: INFO: Initial restart count of pod busybox-f251d3b8-3ac9-4ca6-93a6-23e4b08b4da1 is 0
Jan 30 22:12:50.025: INFO: Restart count of pod container-probe-2424/busybox-f251d3b8-3ac9-4ca6-93a6-23e4b08b4da1 is now 1 (46.19369854s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:12:50.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2424" for this suite.

• [SLOW TEST:54.456 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2525,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:12:50.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:13:37.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7511" for this suite.

• [SLOW TEST:47.318 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2542,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:13:37.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 22:13:38.086: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 22:13:40.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019218, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019218, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019218, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019217, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:13:42.135: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019218, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019218, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019218, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019217, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:13:44.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019218, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019218, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019218, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019217, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 22:13:47.151: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:13:59.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2743" for this suite.
STEP: Destroying namespace "webhook-2743-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.377 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":167,"skipped":2548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:13:59.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 30 22:14:00.587: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 30 22:14:02.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:14:04.620: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:14:06.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:14:08.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:14:10.630: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019240, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 22:14:13.665: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:14:13.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:14:15.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1082" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:16.071 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":168,"skipped":2581,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:14:15.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 30 22:14:15.973: INFO: >>> kubeConfig: /root/.kube/config
Jan 30 22:14:19.974: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:14:32.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-93" for this suite.

• [SLOW TEST:17.150 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":169,"skipped":2588,"failed":0}
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:14:33.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-5578
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 22:14:33.184: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 22:15:11.436: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5578 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 22:15:11.436: INFO: >>> kubeConfig: /root/.kube/config
I0130 22:15:11.502713       8 log.go:172] (0xc0068bc2c0) (0xc00214ad20) Create stream
I0130 22:15:11.502979       8 log.go:172] (0xc0068bc2c0) (0xc00214ad20) Stream added, broadcasting: 1
I0130 22:15:11.508004       8 log.go:172] (0xc0068bc2c0) Reply frame received for 1
I0130 22:15:11.508077       8 log.go:172] (0xc0068bc2c0) (0xc0020606e0) Create stream
I0130 22:15:11.508094       8 log.go:172] (0xc0068bc2c0) (0xc0020606e0) Stream added, broadcasting: 3
I0130 22:15:11.509830       8 log.go:172] (0xc0068bc2c0) Reply frame received for 3
I0130 22:15:11.509857       8 log.go:172] (0xc0068bc2c0) (0xc00214adc0) Create stream
I0130 22:15:11.509872       8 log.go:172] (0xc0068bc2c0) (0xc00214adc0) Stream added, broadcasting: 5
I0130 22:15:11.516080       8 log.go:172] (0xc0068bc2c0) Reply frame received for 5
I0130 22:15:11.637499       8 log.go:172] (0xc0068bc2c0) Data frame received for 3
I0130 22:15:11.637623       8 log.go:172] (0xc0020606e0) (3) Data frame handling
I0130 22:15:11.637654       8 log.go:172] (0xc0020606e0) (3) Data frame sent
I0130 22:15:11.764797       8 log.go:172] (0xc0068bc2c0) Data frame received for 1
I0130 22:15:11.765031       8 log.go:172] (0xc0068bc2c0) (0xc00214adc0) Stream removed, broadcasting: 5
I0130 22:15:11.765134       8 log.go:172] (0xc00214ad20) (1) Data frame handling
I0130 22:15:11.765164       8 log.go:172] (0xc00214ad20) (1) Data frame sent
I0130 22:15:11.765241       8 log.go:172] (0xc0068bc2c0) (0xc0020606e0) Stream removed, broadcasting: 3
I0130 22:15:11.765377       8 log.go:172] (0xc0068bc2c0) (0xc00214ad20) Stream removed, broadcasting: 1
I0130 22:15:11.765411       8 log.go:172] (0xc0068bc2c0) Go away received
I0130 22:15:11.766513       8 log.go:172] (0xc0068bc2c0) (0xc00214ad20) Stream removed, broadcasting: 1
I0130 22:15:11.766609       8 log.go:172] (0xc0068bc2c0) (0xc0020606e0) Stream removed, broadcasting: 3
I0130 22:15:11.766649       8 log.go:172] (0xc0068bc2c0) (0xc00214adc0) Stream removed, broadcasting: 5
Jan 30 22:15:11.767: INFO: Waiting for responses: map[]
Jan 30 22:15:11.776: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5578 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 22:15:11.776: INFO: >>> kubeConfig: /root/.kube/config
I0130 22:15:11.833005       8 log.go:172] (0xc0068bc630) (0xc00214b0e0) Create stream
I0130 22:15:11.833199       8 log.go:172] (0xc0068bc630) (0xc00214b0e0) Stream added, broadcasting: 1
I0130 22:15:11.882196       8 log.go:172] (0xc0068bc630) Reply frame received for 1
I0130 22:15:11.883032       8 log.go:172] (0xc0068bc630) (0xc00214a140) Create stream
I0130 22:15:11.883074       8 log.go:172] (0xc0068bc630) (0xc00214a140) Stream added, broadcasting: 3
I0130 22:15:11.890887       8 log.go:172] (0xc0068bc630) Reply frame received for 3
I0130 22:15:11.891068       8 log.go:172] (0xc0068bc630) (0xc0020601e0) Create stream
I0130 22:15:11.891090       8 log.go:172] (0xc0068bc630) (0xc0020601e0) Stream added, broadcasting: 5
I0130 22:15:11.896100       8 log.go:172] (0xc0068bc630) Reply frame received for 5
I0130 22:15:12.024866       8 log.go:172] (0xc0068bc630) Data frame received for 3
I0130 22:15:12.025072       8 log.go:172] (0xc00214a140) (3) Data frame handling
I0130 22:15:12.025119       8 log.go:172] (0xc00214a140) (3) Data frame sent
I0130 22:15:12.167025       8 log.go:172] (0xc0068bc630) Data frame received for 1
I0130 22:15:12.167256       8 log.go:172] (0xc00214b0e0) (1) Data frame handling
I0130 22:15:12.167290       8 log.go:172] (0xc00214b0e0) (1) Data frame sent
I0130 22:15:12.167483       8 log.go:172] (0xc0068bc630) (0xc0020601e0) Stream removed, broadcasting: 5
I0130 22:15:12.167645       8 log.go:172] (0xc0068bc630) (0xc00214b0e0) Stream removed, broadcasting: 1
I0130 22:15:12.168346       8 log.go:172] (0xc0068bc630) (0xc00214a140) Stream removed, broadcasting: 3
I0130 22:15:12.168650       8 log.go:172] (0xc0068bc630) Go away received
I0130 22:15:12.168724       8 log.go:172] (0xc0068bc630) (0xc00214b0e0) Stream removed, broadcasting: 1
I0130 22:15:12.168748       8 log.go:172] (0xc0068bc630) (0xc00214a140) Stream removed, broadcasting: 3
I0130 22:15:12.168767       8 log.go:172] (0xc0068bc630) (0xc0020601e0) Stream removed, broadcasting: 5
Jan 30 22:15:12.169: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:15:12.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5578" for this suite.

• [SLOW TEST:39.172 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2591,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:15:12.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Jan 30 22:15:12.295: INFO: Waiting up to 5m0s for pod "client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f" in namespace "containers-5615" to be "success or failure"
Jan 30 22:15:12.326: INFO: Pod "client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.314165ms
Jan 30 22:15:14.332: INFO: Pod "client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036432932s
Jan 30 22:15:16.339: INFO: Pod "client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043658778s
Jan 30 22:15:18.385: INFO: Pod "client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089870743s
Jan 30 22:15:20.674: INFO: Pod "client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.378718438s
Jan 30 22:15:22.688: INFO: Pod "client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.392861616s
STEP: Saw pod success
Jan 30 22:15:22.688: INFO: Pod "client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f" satisfied condition "success or failure"
Jan 30 22:15:22.696: INFO: Trying to get logs from node jerma-node pod client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f container test-container: 
STEP: delete the pod
Jan 30 22:15:22.776: INFO: Waiting for pod client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f to disappear
Jan 30 22:15:22.783: INFO: Pod client-containers-d553a610-581b-485a-a1c5-7957e4aa9a7f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:15:22.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5615" for this suite.

• [SLOW TEST:10.624 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2613,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:15:22.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:15:22.962: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 30 22:15:22.999: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 30 22:15:28.032: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 30 22:15:30.047: INFO: Creating deployment "test-rolling-update-deployment"
Jan 30 22:15:30.052: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 30 22:15:30.085: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 30 22:15:32.096: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 30 22:15:32.100: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:15:34.106: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:15:36.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019330, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:15:38.136: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 30 22:15:38.147: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-7970 /apis/apps/v1/namespaces/deployment-7970/deployments/test-rolling-update-deployment 31fb5961-4eaf-4aba-b3a9-39021bca5d10 5382620 1 2020-01-30 22:15:30 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005750828  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-30 22:15:30 +0000 UTC,LastTransitionTime:2020-01-30 22:15:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-30 22:15:36 +0000 UTC,LastTransitionTime:2020-01-30 22:15:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 30 22:15:38.151: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-7970 /apis/apps/v1/namespaces/deployment-7970/replicasets/test-rolling-update-deployment-67cf4f6444 dacf0a78-28a4-43bd-9a52-04bbc4d9ea6b 5382610 1 2020-01-30 22:15:30 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 31fb5961-4eaf-4aba-b3a9-39021bca5d10 0xc005750cc7 0xc005750cc8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005750d38  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 30 22:15:38.151: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 30 22:15:38.151: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-7970 /apis/apps/v1/namespaces/deployment-7970/replicasets/test-rolling-update-controller 2ccc1500-ecd3-4a4c-86b7-3ff7d0d40a4f 5382619 2 2020-01-30 22:15:22 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 31fb5961-4eaf-4aba-b3a9-39021bca5d10 0xc005750bf7 0xc005750bf8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005750c58  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 30 22:15:38.155: INFO: Pod "test-rolling-update-deployment-67cf4f6444-5klbt" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-5klbt test-rolling-update-deployment-67cf4f6444- deployment-7970 /api/v1/namespaces/deployment-7970/pods/test-rolling-update-deployment-67cf4f6444-5klbt 42b12048-9e59-4983-9f4c-a9773a03eead 5382609 0 2020-01-30 22:15:30 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 dacf0a78-28a4-43bd-9a52-04bbc4d9ea6b 0xc005821557 0xc005821558}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dm5gw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dm5gw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dm5gw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 22:15:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 22:15:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 22:15:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 22:15:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-30 22:15:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 22:15:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://704a92d51c6f4f4486d49b5fb7a19a17bbfa9cf9ec34fa8ad6599ab6c082189c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:15:38.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7970" for this suite.

• [SLOW TEST:15.348 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":172,"skipped":2626,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:15:38.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:15:38.349: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 30 22:15:38.387: INFO: Number of nodes with available pods: 0
Jan 30 22:15:38.387: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:15:39.404: INFO: Number of nodes with available pods: 0
Jan 30 22:15:39.404: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:15:40.698: INFO: Number of nodes with available pods: 0
Jan 30 22:15:40.698: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:15:41.397: INFO: Number of nodes with available pods: 0
Jan 30 22:15:41.397: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:15:42.411: INFO: Number of nodes with available pods: 0
Jan 30 22:15:42.411: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:15:44.563: INFO: Number of nodes with available pods: 0
Jan 30 22:15:44.563: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:15:46.051: INFO: Number of nodes with available pods: 0
Jan 30 22:15:46.051: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:15:46.436: INFO: Number of nodes with available pods: 0
Jan 30 22:15:46.436: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:15:47.399: INFO: Number of nodes with available pods: 0
Jan 30 22:15:47.399: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:15:48.405: INFO: Number of nodes with available pods: 1
Jan 30 22:15:48.405: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:15:49.408: INFO: Number of nodes with available pods: 2
Jan 30 22:15:49.408: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 30 22:15:49.519: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:49.519: INFO: Wrong image for pod: daemon-set-n2l8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:50.566: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:50.566: INFO: Wrong image for pod: daemon-set-n2l8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:51.599: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:51.599: INFO: Wrong image for pod: daemon-set-n2l8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:52.552: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:52.553: INFO: Wrong image for pod: daemon-set-n2l8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:53.549: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:53.549: INFO: Wrong image for pod: daemon-set-n2l8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:54.556: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:54.556: INFO: Wrong image for pod: daemon-set-n2l8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:54.556: INFO: Pod daemon-set-n2l8j is not available
Jan 30 22:15:55.548: INFO: Pod daemon-set-hjvvn is not available
Jan 30 22:15:55.549: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:56.580: INFO: Pod daemon-set-hjvvn is not available
Jan 30 22:15:56.580: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:57.550: INFO: Pod daemon-set-hjvvn is not available
Jan 30 22:15:57.550: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:58.556: INFO: Pod daemon-set-hjvvn is not available
Jan 30 22:15:58.556: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:15:59.550: INFO: Pod daemon-set-hjvvn is not available
Jan 30 22:15:59.550: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:00.578: INFO: Pod daemon-set-hjvvn is not available
Jan 30 22:16:00.578: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:01.547: INFO: Pod daemon-set-hjvvn is not available
Jan 30 22:16:01.547: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:02.553: INFO: Pod daemon-set-hjvvn is not available
Jan 30 22:16:02.553: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:03.551: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:04.555: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:05.551: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:06.554: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:07.550: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:07.550: INFO: Pod daemon-set-mxjtq is not available
Jan 30 22:16:08.587: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:08.587: INFO: Pod daemon-set-mxjtq is not available
Jan 30 22:16:09.549: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:09.549: INFO: Pod daemon-set-mxjtq is not available
Jan 30 22:16:10.555: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:10.555: INFO: Pod daemon-set-mxjtq is not available
Jan 30 22:16:11.551: INFO: Wrong image for pod: daemon-set-mxjtq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 30 22:16:11.552: INFO: Pod daemon-set-mxjtq is not available
Jan 30 22:16:12.553: INFO: Pod daemon-set-dzsm8 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 30 22:16:12.611: INFO: Number of nodes with available pods: 1
Jan 30 22:16:12.611: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:16:13.631: INFO: Number of nodes with available pods: 1
Jan 30 22:16:13.632: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:16:14.625: INFO: Number of nodes with available pods: 1
Jan 30 22:16:14.625: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:16:15.619: INFO: Number of nodes with available pods: 1
Jan 30 22:16:15.619: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:16:16.623: INFO: Number of nodes with available pods: 1
Jan 30 22:16:16.623: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:16:17.627: INFO: Number of nodes with available pods: 1
Jan 30 22:16:17.627: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:16:18.628: INFO: Number of nodes with available pods: 2
Jan 30 22:16:18.629: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1878, will wait for the garbage collector to delete the pods
Jan 30 22:16:18.730: INFO: Deleting DaemonSet.extensions daemon-set took: 24.991124ms
Jan 30 22:16:19.030: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.955274ms
Jan 30 22:16:32.436: INFO: Number of nodes with available pods: 0
Jan 30 22:16:32.437: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 22:16:32.441: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1878/daemonsets","resourceVersion":"5382847"},"items":null}

Jan 30 22:16:32.446: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1878/pods","resourceVersion":"5382847"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:16:32.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1878" for this suite.

• [SLOW TEST:54.370 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":173,"skipped":2634,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:16:32.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Jan 30 22:16:32.690: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix227332888/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:16:32.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8207" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":174,"skipped":2673,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:16:32.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 22:16:33.225: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 22:16:35.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:16:37.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:16:39.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019393, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 22:16:42.310: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
Jan 30 22:16:42.358: INFO: Waiting for webhook configuration to be ready...
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:16:42.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-52" for this suite.
STEP: Destroying namespace "webhook-52-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.138 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":175,"skipped":2708,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:16:42.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:16:43.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 30 22:16:46.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8768 create -f -'
Jan 30 22:16:49.025: INFO: stderr: ""
Jan 30 22:16:49.025: INFO: stdout: "e2e-test-crd-publish-openapi-272-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 30 22:16:49.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8768 delete e2e-test-crd-publish-openapi-272-crds test-cr'
Jan 30 22:16:49.175: INFO: stderr: ""
Jan 30 22:16:49.176: INFO: stdout: "e2e-test-crd-publish-openapi-272-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 30 22:16:49.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8768 apply -f -'
Jan 30 22:16:49.570: INFO: stderr: ""
Jan 30 22:16:49.570: INFO: stdout: "e2e-test-crd-publish-openapi-272-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 30 22:16:49.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8768 delete e2e-test-crd-publish-openapi-272-crds test-cr'
Jan 30 22:16:49.700: INFO: stderr: ""
Jan 30 22:16:49.700: INFO: stdout: "e2e-test-crd-publish-openapi-272-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 30 22:16:49.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-272-crds'
Jan 30 22:16:50.084: INFO: stderr: ""
Jan 30 22:16:50.084: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-272-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:16:53.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8768" for this suite.

• [SLOW TEST:10.661 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":176,"skipped":2734,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:16:53.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 30 22:16:53.705: INFO: Created pod &Pod{ObjectMeta:{dns-4248  dns-4248 /api/v1/namespaces/dns-4248/pods/dns-4248 4d37cc76-7a54-4e71-9bbc-dc71e8083245 5383029 0 2020-01-30 22:16:53 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mklwj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mklwj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mklwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 30 22:16:59.796: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4248 PodName:dns-4248 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 22:16:59.797: INFO: >>> kubeConfig: /root/.kube/config
I0130 22:16:59.874776       8 log.go:172] (0xc004b12580) (0xc00214abe0) Create stream
I0130 22:16:59.874922       8 log.go:172] (0xc004b12580) (0xc00214abe0) Stream added, broadcasting: 1
I0130 22:16:59.884035       8 log.go:172] (0xc004b12580) Reply frame received for 1
I0130 22:16:59.884249       8 log.go:172] (0xc004b12580) (0xc00214ac80) Create stream
I0130 22:16:59.884295       8 log.go:172] (0xc004b12580) (0xc00214ac80) Stream added, broadcasting: 3
I0130 22:16:59.888088       8 log.go:172] (0xc004b12580) Reply frame received for 3
I0130 22:16:59.888383       8 log.go:172] (0xc004b12580) (0xc002060780) Create stream
I0130 22:16:59.888416       8 log.go:172] (0xc004b12580) (0xc002060780) Stream added, broadcasting: 5
I0130 22:16:59.891903       8 log.go:172] (0xc004b12580) Reply frame received for 5
I0130 22:17:00.010900       8 log.go:172] (0xc004b12580) Data frame received for 3
I0130 22:17:00.011014       8 log.go:172] (0xc00214ac80) (3) Data frame handling
I0130 22:17:00.011032       8 log.go:172] (0xc00214ac80) (3) Data frame sent
I0130 22:17:00.105723       8 log.go:172] (0xc004b12580) (0xc00214ac80) Stream removed, broadcasting: 3
I0130 22:17:00.106239       8 log.go:172] (0xc004b12580) Data frame received for 1
I0130 22:17:00.106271       8 log.go:172] (0xc00214abe0) (1) Data frame handling
I0130 22:17:00.106291       8 log.go:172] (0xc00214abe0) (1) Data frame sent
I0130 22:17:00.106325       8 log.go:172] (0xc004b12580) (0xc002060780) Stream removed, broadcasting: 5
I0130 22:17:00.106418       8 log.go:172] (0xc004b12580) (0xc00214abe0) Stream removed, broadcasting: 1
I0130 22:17:00.106449       8 log.go:172] (0xc004b12580) Go away received
I0130 22:17:00.106943       8 log.go:172] (0xc004b12580) (0xc00214abe0) Stream removed, broadcasting: 1
I0130 22:17:00.106956       8 log.go:172] (0xc004b12580) (0xc00214ac80) Stream removed, broadcasting: 3
I0130 22:17:00.106962       8 log.go:172] (0xc004b12580) (0xc002060780) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan 30 22:17:00.107: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4248 PodName:dns-4248 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 22:17:00.107: INFO: >>> kubeConfig: /root/.kube/config
I0130 22:17:00.150581       8 log.go:172] (0xc004b12bb0) (0xc00214ae60) Create stream
I0130 22:17:00.150784       8 log.go:172] (0xc004b12bb0) (0xc00214ae60) Stream added, broadcasting: 1
I0130 22:17:00.155829       8 log.go:172] (0xc004b12bb0) Reply frame received for 1
I0130 22:17:00.155921       8 log.go:172] (0xc004b12bb0) (0xc00240c3c0) Create stream
I0130 22:17:00.155934       8 log.go:172] (0xc004b12bb0) (0xc00240c3c0) Stream added, broadcasting: 3
I0130 22:17:00.160615       8 log.go:172] (0xc004b12bb0) Reply frame received for 3
I0130 22:17:00.160674       8 log.go:172] (0xc004b12bb0) (0xc00214b0e0) Create stream
I0130 22:17:00.160684       8 log.go:172] (0xc004b12bb0) (0xc00214b0e0) Stream added, broadcasting: 5
I0130 22:17:00.162097       8 log.go:172] (0xc004b12bb0) Reply frame received for 5
I0130 22:17:00.246649       8 log.go:172] (0xc004b12bb0) Data frame received for 3
I0130 22:17:00.246727       8 log.go:172] (0xc00240c3c0) (3) Data frame handling
I0130 22:17:00.246807       8 log.go:172] (0xc00240c3c0) (3) Data frame sent
I0130 22:17:00.319860       8 log.go:172] (0xc004b12bb0) Data frame received for 1
I0130 22:17:00.319940       8 log.go:172] (0xc004b12bb0) (0xc00214b0e0) Stream removed, broadcasting: 5
I0130 22:17:00.319995       8 log.go:172] (0xc00214ae60) (1) Data frame handling
I0130 22:17:00.320036       8 log.go:172] (0xc00214ae60) (1) Data frame sent
I0130 22:17:00.320063       8 log.go:172] (0xc004b12bb0) (0xc00240c3c0) Stream removed, broadcasting: 3
I0130 22:17:00.320099       8 log.go:172] (0xc004b12bb0) (0xc00214ae60) Stream removed, broadcasting: 1
I0130 22:17:00.320137       8 log.go:172] (0xc004b12bb0) Go away received
I0130 22:17:00.320449       8 log.go:172] (0xc004b12bb0) (0xc00214ae60) Stream removed, broadcasting: 1
I0130 22:17:00.320467       8 log.go:172] (0xc004b12bb0) (0xc00240c3c0) Stream removed, broadcasting: 3
I0130 22:17:00.320476       8 log.go:172] (0xc004b12bb0) (0xc00214b0e0) Stream removed, broadcasting: 5
Jan 30 22:17:00.320: INFO: Deleting pod dns-4248...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:17:00.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4248" for this suite.

• [SLOW TEST:6.779 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":177,"skipped":2744,"failed":0}
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:17:00.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 30 22:17:14.657: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 22:17:14.669: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 22:17:16.670: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 22:17:16.708: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 22:17:18.670: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 22:17:18.678: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 22:17:20.670: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 22:17:20.677: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 22:17:22.671: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 22:17:22.681: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 22:17:24.670: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 22:17:24.678: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 22:17:26.670: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 22:17:26.677: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 22:17:28.670: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 22:17:28.674: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 22:17:30.670: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 22:17:30.692: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 30 22:17:32.671: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 30 22:17:32.679: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:17:32.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7612" for this suite.

• [SLOW TEST:32.372 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2744,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:17:32.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-w27n
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 22:17:32.935: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-w27n" in namespace "subpath-4756" to be "success or failure"
Jan 30 22:17:32.943: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.025889ms
Jan 30 22:17:34.951: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015709791s
Jan 30 22:17:36.973: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038071984s
Jan 30 22:17:38.981: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045368933s
Jan 30 22:17:40.988: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 8.052885862s
Jan 30 22:17:42.993: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 10.058029707s
Jan 30 22:17:44.998: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 12.062923711s
Jan 30 22:17:47.004: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 14.068430358s
Jan 30 22:17:49.013: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 16.078163127s
Jan 30 22:17:51.019: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 18.083400839s
Jan 30 22:17:53.027: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 20.091444986s
Jan 30 22:17:55.033: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 22.097454601s
Jan 30 22:17:57.040: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 24.104800955s
Jan 30 22:17:59.050: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 26.11483535s
Jan 30 22:18:01.059: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Running", Reason="", readiness=true. Elapsed: 28.12418099s
Jan 30 22:18:03.075: INFO: Pod "pod-subpath-test-configmap-w27n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.139335634s
STEP: Saw pod success
Jan 30 22:18:03.075: INFO: Pod "pod-subpath-test-configmap-w27n" satisfied condition "success or failure"
Jan 30 22:18:03.082: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-w27n container test-container-subpath-configmap-w27n: 
STEP: delete the pod
Jan 30 22:18:03.286: INFO: Waiting for pod pod-subpath-test-configmap-w27n to disappear
Jan 30 22:18:03.297: INFO: Pod pod-subpath-test-configmap-w27n no longer exists
STEP: Deleting pod pod-subpath-test-configmap-w27n
Jan 30 22:18:03.298: INFO: Deleting pod "pod-subpath-test-configmap-w27n" in namespace "subpath-4756"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:18:03.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4756" for this suite.

• [SLOW TEST:30.578 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":179,"skipped":2754,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:18:03.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 30 22:18:03.433: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:18:14.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2966" for this suite.

• [SLOW TEST:11.370 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":180,"skipped":2779,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:18:14.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 30 22:18:25.520: INFO: Successfully updated pod "labelsupdate115bed1e-9d16-4dc0-b3f1-8d30db319639"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:18:27.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-711" for this suite.

• [SLOW TEST:12.884 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2850,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:18:27.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 22:18:27.690: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d40f9402-bf56-498c-ac44-96a3c04b5eee" in namespace "projected-9375" to be "success or failure"
Jan 30 22:18:27.719: INFO: Pod "downwardapi-volume-d40f9402-bf56-498c-ac44-96a3c04b5eee": Phase="Pending", Reason="", readiness=false. Elapsed: 28.428022ms
Jan 30 22:18:29.728: INFO: Pod "downwardapi-volume-d40f9402-bf56-498c-ac44-96a3c04b5eee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037786971s
Jan 30 22:18:31.737: INFO: Pod "downwardapi-volume-d40f9402-bf56-498c-ac44-96a3c04b5eee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046208623s
Jan 30 22:18:33.747: INFO: Pod "downwardapi-volume-d40f9402-bf56-498c-ac44-96a3c04b5eee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056052502s
Jan 30 22:18:36.004: INFO: Pod "downwardapi-volume-d40f9402-bf56-498c-ac44-96a3c04b5eee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.31282821s
STEP: Saw pod success
Jan 30 22:18:36.004: INFO: Pod "downwardapi-volume-d40f9402-bf56-498c-ac44-96a3c04b5eee" satisfied condition "success or failure"
Jan 30 22:18:36.008: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d40f9402-bf56-498c-ac44-96a3c04b5eee container client-container: 
STEP: delete the pod
Jan 30 22:18:36.045: INFO: Waiting for pod downwardapi-volume-d40f9402-bf56-498c-ac44-96a3c04b5eee to disappear
Jan 30 22:18:36.048: INFO: Pod downwardapi-volume-d40f9402-bf56-498c-ac44-96a3c04b5eee no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:18:36.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9375" for this suite.

• [SLOW TEST:8.487 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2854,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:18:36.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 22:18:36.817: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 22:18:38.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019517, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:18:40.849: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019517, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:18:42.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019517, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:18:44.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019517, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019516, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 22:18:47.884: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:18:48.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9475" for this suite.
STEP: Destroying namespace "webhook-9475-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.127 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":183,"skipped":2893,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:18:48.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Jan 30 22:18:48.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 30 22:18:48.713: INFO: stderr: ""
Jan 30 22:18:48.713: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:18:48.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9228" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":184,"skipped":2915,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:18:48.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-cf015249-b279-45cc-9713-73f53562d08d in namespace container-probe-3030
Jan 30 22:19:00.818: INFO: Started pod liveness-cf015249-b279-45cc-9713-73f53562d08d in namespace container-probe-3030
STEP: checking the pod's current state and verifying that restartCount is present
Jan 30 22:19:00.823: INFO: Initial restart count of pod liveness-cf015249-b279-45cc-9713-73f53562d08d is 0
Jan 30 22:19:16.939: INFO: Restart count of pod container-probe-3030/liveness-cf015249-b279-45cc-9713-73f53562d08d is now 1 (16.115345027s elapsed)
Jan 30 22:19:35.021: INFO: Restart count of pod container-probe-3030/liveness-cf015249-b279-45cc-9713-73f53562d08d is now 2 (34.198237522s elapsed)
Jan 30 22:19:57.138: INFO: Restart count of pod container-probe-3030/liveness-cf015249-b279-45cc-9713-73f53562d08d is now 3 (56.315002078s elapsed)
Jan 30 22:20:17.209: INFO: Restart count of pod container-probe-3030/liveness-cf015249-b279-45cc-9713-73f53562d08d is now 4 (1m16.385701364s elapsed)
Jan 30 22:21:19.490: INFO: Restart count of pod container-probe-3030/liveness-cf015249-b279-45cc-9713-73f53562d08d is now 5 (2m18.667317282s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:21:19.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3030" for this suite.

• [SLOW TEST:150.863 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2917,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:21:19.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
STEP: creating the pod
Jan 30 22:21:19.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4078'
Jan 30 22:21:20.161: INFO: stderr: ""
Jan 30 22:21:20.161: INFO: stdout: "pod/pause created\n"
Jan 30 22:21:20.161: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 30 22:21:20.161: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4078" to be "running and ready"
Jan 30 22:21:20.172: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.060681ms
Jan 30 22:21:22.178: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01677039s
Jan 30 22:21:24.184: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022756158s
Jan 30 22:21:26.200: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038923838s
Jan 30 22:21:28.208: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.0466407s
Jan 30 22:21:28.208: INFO: Pod "pause" satisfied condition "running and ready"
Jan 30 22:21:28.208: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 30 22:21:28.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4078'
Jan 30 22:21:28.442: INFO: stderr: ""
Jan 30 22:21:28.442: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 30 22:21:28.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4078'
Jan 30 22:21:28.672: INFO: stderr: ""
Jan 30 22:21:28.673: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 30 22:21:28.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4078'
Jan 30 22:21:28.911: INFO: stderr: ""
Jan 30 22:21:28.911: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 30 22:21:28.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4078'
Jan 30 22:21:29.076: INFO: stderr: ""
Jan 30 22:21:29.076: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369
STEP: using delete to clean up resources
Jan 30 22:21:29.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4078'
Jan 30 22:21:29.347: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 22:21:29.347: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 30 22:21:29.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4078'
Jan 30 22:21:29.517: INFO: stderr: "No resources found in kubectl-4078 namespace.\n"
Jan 30 22:21:29.517: INFO: stdout: ""
Jan 30 22:21:29.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4078 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 30 22:21:29.639: INFO: stderr: ""
Jan 30 22:21:29.639: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:21:29.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4078" for this suite.

• [SLOW TEST:10.088 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":186,"skipped":2943,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:21:29.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:21:46.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8482" for this suite.

• [SLOW TEST:17.195 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":187,"skipped":2979,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:21:46.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8110
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-8110
I0130 22:21:47.683783       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8110, replica count: 2
I0130 22:21:50.741195       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 22:21:53.741672       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 22:21:56.742053       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 22:21:59.742733       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 30 22:21:59.743: INFO: Creating new exec pod
Jan 30 22:22:08.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8110 execpodgbpfc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 30 22:22:09.109: INFO: stderr: "I0130 22:22:08.935240    1996 log.go:172] (0xc00097adc0) (0xc0009746e0) Create stream\nI0130 22:22:08.935532    1996 log.go:172] (0xc00097adc0) (0xc0009746e0) Stream added, broadcasting: 1\nI0130 22:22:08.940342    1996 log.go:172] (0xc00097adc0) Reply frame received for 1\nI0130 22:22:08.942016    1996 log.go:172] (0xc00097adc0) (0xc0008fe280) Create stream\nI0130 22:22:08.942204    1996 log.go:172] (0xc00097adc0) (0xc0008fe280) Stream added, broadcasting: 3\nI0130 22:22:08.948053    1996 log.go:172] (0xc00097adc0) Reply frame received for 3\nI0130 22:22:08.948176    1996 log.go:172] (0xc00097adc0) (0xc0008fe000) Create stream\nI0130 22:22:08.948228    1996 log.go:172] (0xc00097adc0) (0xc0008fe000) Stream added, broadcasting: 5\nI0130 22:22:08.951469    1996 log.go:172] (0xc00097adc0) Reply frame received for 5\nI0130 22:22:09.023757    1996 log.go:172] (0xc00097adc0) Data frame received for 5\nI0130 22:22:09.023803    1996 log.go:172] (0xc0008fe000) (5) Data frame handling\nI0130 22:22:09.023822    1996 log.go:172] (0xc0008fe000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0130 22:22:09.026275    1996 log.go:172] (0xc00097adc0) Data frame received for 5\nI0130 22:22:09.026325    1996 log.go:172] (0xc0008fe000) (5) Data frame handling\nI0130 22:22:09.026344    1996 log.go:172] (0xc0008fe000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0130 22:22:09.101182    1996 log.go:172] (0xc00097adc0) Data frame received for 1\nI0130 22:22:09.101247    1996 log.go:172] (0xc0009746e0) (1) Data frame handling\nI0130 22:22:09.101285    1996 log.go:172] (0xc0009746e0) (1) Data frame sent\nI0130 22:22:09.101306    1996 log.go:172] (0xc00097adc0) (0xc0009746e0) Stream removed, broadcasting: 1\nI0130 22:22:09.101491    1996 log.go:172] (0xc00097adc0) (0xc0008fe280) Stream removed, broadcasting: 3\nI0130 22:22:09.101550    1996 log.go:172] (0xc00097adc0) (0xc0008fe000) Stream removed, broadcasting: 5\nI0130 22:22:09.101580    1996 log.go:172] (0xc00097adc0) Go away received\nI0130 22:22:09.102196    1996 log.go:172] (0xc00097adc0) (0xc0009746e0) Stream removed, broadcasting: 1\nI0130 22:22:09.102210    1996 log.go:172] (0xc00097adc0) (0xc0008fe280) Stream removed, broadcasting: 3\nI0130 22:22:09.102217    1996 log.go:172] (0xc00097adc0) (0xc0008fe000) Stream removed, broadcasting: 5\n"
Jan 30 22:22:09.109: INFO: stdout: ""
Jan 30 22:22:09.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8110 execpodgbpfc -- /bin/sh -x -c nc -zv -t -w 2 10.96.235.176 80'
Jan 30 22:22:09.482: INFO: stderr: "I0130 22:22:09.326459    2016 log.go:172] (0xc000c054a0) (0xc000af28c0) Create stream\nI0130 22:22:09.328167    2016 log.go:172] (0xc000c054a0) (0xc000af28c0) Stream added, broadcasting: 1\nI0130 22:22:09.334495    2016 log.go:172] (0xc000c054a0) Reply frame received for 1\nI0130 22:22:09.334563    2016 log.go:172] (0xc000c054a0) (0xc0006e4640) Create stream\nI0130 22:22:09.334580    2016 log.go:172] (0xc000c054a0) (0xc0006e4640) Stream added, broadcasting: 3\nI0130 22:22:09.335855    2016 log.go:172] (0xc000c054a0) Reply frame received for 3\nI0130 22:22:09.335917    2016 log.go:172] (0xc000c054a0) (0xc000533400) Create stream\nI0130 22:22:09.335939    2016 log.go:172] (0xc000c054a0) (0xc000533400) Stream added, broadcasting: 5\nI0130 22:22:09.338396    2016 log.go:172] (0xc000c054a0) Reply frame received for 5\nI0130 22:22:09.405584    2016 log.go:172] (0xc000c054a0) Data frame received for 5\nI0130 22:22:09.405663    2016 log.go:172] (0xc000533400) (5) Data frame handling\nI0130 22:22:09.405695    2016 log.go:172] (0xc000533400) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.235.176 80\nI0130 22:22:09.405980    2016 log.go:172] (0xc000c054a0) Data frame received for 5\nI0130 22:22:09.406000    2016 log.go:172] (0xc000533400) (5) Data frame handling\nI0130 22:22:09.406023    2016 log.go:172] (0xc000533400) (5) Data frame sent\nConnection to 10.96.235.176 80 port [tcp/http] succeeded!\nI0130 22:22:09.469937    2016 log.go:172] (0xc000c054a0) Data frame received for 1\nI0130 22:22:09.470352    2016 log.go:172] (0xc000c054a0) (0xc0006e4640) Stream removed, broadcasting: 3\nI0130 22:22:09.470411    2016 log.go:172] (0xc000af28c0) (1) Data frame handling\nI0130 22:22:09.470434    2016 log.go:172] (0xc000af28c0) (1) Data frame sent\nI0130 22:22:09.470448    2016 log.go:172] (0xc000c054a0) (0xc000af28c0) Stream removed, broadcasting: 1\nI0130 22:22:09.470627    2016 log.go:172] (0xc000c054a0) (0xc000533400) Stream removed, broadcasting: 5\nI0130 22:22:09.471160    2016 log.go:172] (0xc000c054a0) Go away received\nI0130 22:22:09.471882    2016 log.go:172] (0xc000c054a0) (0xc000af28c0) Stream removed, broadcasting: 1\nI0130 22:22:09.471993    2016 log.go:172] (0xc000c054a0) (0xc0006e4640) Stream removed, broadcasting: 3\nI0130 22:22:09.472046    2016 log.go:172] (0xc000c054a0) (0xc000533400) Stream removed, broadcasting: 5\n"
Jan 30 22:22:09.482: INFO: stdout: ""
Jan 30 22:22:09.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8110 execpodgbpfc -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 31324'
Jan 30 22:22:09.846: INFO: stderr: "I0130 22:22:09.662404    2038 log.go:172] (0xc0004fcd10) (0xc0003595e0) Create stream\nI0130 22:22:09.662631    2038 log.go:172] (0xc0004fcd10) (0xc0003595e0) Stream added, broadcasting: 1\nI0130 22:22:09.666063    2038 log.go:172] (0xc0004fcd10) Reply frame received for 1\nI0130 22:22:09.666100    2038 log.go:172] (0xc0004fcd10) (0xc0008d6000) Create stream\nI0130 22:22:09.666106    2038 log.go:172] (0xc0004fcd10) (0xc0008d6000) Stream added, broadcasting: 3\nI0130 22:22:09.667301    2038 log.go:172] (0xc0004fcd10) Reply frame received for 3\nI0130 22:22:09.667325    2038 log.go:172] (0xc0004fcd10) (0xc000a04000) Create stream\nI0130 22:22:09.667336    2038 log.go:172] (0xc0004fcd10) (0xc000a04000) Stream added, broadcasting: 5\nI0130 22:22:09.668678    2038 log.go:172] (0xc0004fcd10) Reply frame received for 5\nI0130 22:22:09.749824    2038 log.go:172] (0xc0004fcd10) Data frame received for 5\nI0130 22:22:09.749899    2038 log.go:172] (0xc000a04000) (5) Data frame handling\nI0130 22:22:09.749938    2038 log.go:172] (0xc000a04000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 31324\nI0130 22:22:09.750420    2038 log.go:172] (0xc0004fcd10) Data frame received for 5\nI0130 22:22:09.750440    2038 log.go:172] (0xc000a04000) (5) Data frame handling\nI0130 22:22:09.750459    2038 log.go:172] (0xc000a04000) (5) Data frame sent\nConnection to 10.96.2.250 31324 port [tcp/31324] succeeded!\nI0130 22:22:09.832013    2038 log.go:172] (0xc0004fcd10) Data frame received for 1\nI0130 22:22:09.832244    2038 log.go:172] (0xc0004fcd10) (0xc0008d6000) Stream removed, broadcasting: 3\nI0130 22:22:09.832464    2038 log.go:172] (0xc0003595e0) (1) Data frame handling\nI0130 22:22:09.832500    2038 log.go:172] (0xc0003595e0) (1) Data frame sent\nI0130 22:22:09.832505    2038 log.go:172] (0xc0004fcd10) (0xc0003595e0) Stream removed, broadcasting: 1\nI0130 22:22:09.833257    2038 log.go:172] (0xc0004fcd10) (0xc000a04000) Stream removed, broadcasting: 5\nI0130 22:22:09.833370    2038 log.go:172] (0xc0004fcd10) Go away received\nI0130 22:22:09.833412    2038 log.go:172] (0xc0004fcd10) (0xc0003595e0) Stream removed, broadcasting: 1\nI0130 22:22:09.833447    2038 log.go:172] (0xc0004fcd10) (0xc0008d6000) Stream removed, broadcasting: 3\nI0130 22:22:09.833460    2038 log.go:172] (0xc0004fcd10) (0xc000a04000) Stream removed, broadcasting: 5\n"
Jan 30 22:22:09.846: INFO: stdout: ""
Jan 30 22:22:09.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8110 execpodgbpfc -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 31324'
Jan 30 22:22:10.200: INFO: stderr: "I0130 22:22:10.052701    2060 log.go:172] (0xc000bef550) (0xc000a2a640) Create stream\nI0130 22:22:10.053048    2060 log.go:172] (0xc000bef550) (0xc000a2a640) Stream added, broadcasting: 1\nI0130 22:22:10.067330    2060 log.go:172] (0xc000bef550) Reply frame received for 1\nI0130 22:22:10.067369    2060 log.go:172] (0xc000bef550) (0xc000695ae0) Create stream\nI0130 22:22:10.067378    2060 log.go:172] (0xc000bef550) (0xc000695ae0) Stream added, broadcasting: 3\nI0130 22:22:10.068266    2060 log.go:172] (0xc000bef550) Reply frame received for 3\nI0130 22:22:10.068284    2060 log.go:172] (0xc000bef550) (0xc0006346e0) Create stream\nI0130 22:22:10.068290    2060 log.go:172] (0xc000bef550) (0xc0006346e0) Stream added, broadcasting: 5\nI0130 22:22:10.069073    2060 log.go:172] (0xc000bef550) Reply frame received for 5\nI0130 22:22:10.125102    2060 log.go:172] (0xc000bef550) Data frame received for 5\nI0130 22:22:10.125178    2060 log.go:172] (0xc0006346e0) (5) Data frame handling\nI0130 22:22:10.125214    2060 log.go:172] (0xc0006346e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 31324\nI0130 22:22:10.127288    2060 log.go:172] (0xc000bef550) Data frame received for 5\nI0130 22:22:10.127321    2060 log.go:172] (0xc0006346e0) (5) Data frame handling\nI0130 22:22:10.127342    2060 log.go:172] (0xc0006346e0) (5) Data frame sent\nConnection to 10.96.1.234 31324 port [tcp/31324] succeeded!\nI0130 22:22:10.188425    2060 log.go:172] (0xc000bef550) (0xc000695ae0) Stream removed, broadcasting: 3\nI0130 22:22:10.188640    2060 log.go:172] (0xc000bef550) Data frame received for 1\nI0130 22:22:10.188653    2060 log.go:172] (0xc000a2a640) (1) Data frame handling\nI0130 22:22:10.188669    2060 log.go:172] (0xc000a2a640) (1) Data frame sent\nI0130 22:22:10.188676    2060 log.go:172] (0xc000bef550) (0xc000a2a640) Stream removed, broadcasting: 1\nI0130 22:22:10.189451    2060 log.go:172] (0xc000bef550) (0xc0006346e0) Stream removed, broadcasting: 5\nI0130 22:22:10.189561    2060 log.go:172] (0xc000bef550) Go away received\nI0130 22:22:10.189752    2060 log.go:172] (0xc000bef550) (0xc000a2a640) Stream removed, broadcasting: 1\nI0130 22:22:10.189814    2060 log.go:172] (0xc000bef550) (0xc000695ae0) Stream removed, broadcasting: 3\nI0130 22:22:10.189839    2060 log.go:172] (0xc000bef550) (0xc0006346e0) Stream removed, broadcasting: 5\n"
Jan 30 22:22:10.200: INFO: stdout: ""
Jan 30 22:22:10.201: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:22:10.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8110" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:23.448 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":188,"skipped":2990,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:22:10.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-e91134af-4027-431e-b01d-41629092f61c
STEP: Creating a pod to test consume secrets
Jan 30 22:22:10.439: INFO: Waiting up to 5m0s for pod "pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb" in namespace "secrets-5036" to be "success or failure"
Jan 30 22:22:10.443: INFO: Pod "pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.630263ms
Jan 30 22:22:12.452: INFO: Pod "pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012754321s
Jan 30 22:22:14.459: INFO: Pod "pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020357974s
Jan 30 22:22:16.465: INFO: Pod "pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0256973s
Jan 30 22:22:18.473: INFO: Pod "pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033924713s
Jan 30 22:22:20.581: INFO: Pod "pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.142110534s
Jan 30 22:22:22.597: INFO: Pod "pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.158540843s
STEP: Saw pod success
Jan 30 22:22:22.598: INFO: Pod "pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb" satisfied condition "success or failure"
Jan 30 22:22:22.601: INFO: Trying to get logs from node jerma-node pod pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb container secret-volume-test: 
STEP: delete the pod
Jan 30 22:22:22.646: INFO: Waiting for pod pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb to disappear
Jan 30 22:22:22.666: INFO: Pod pod-secrets-14e8a3d6-a1e9-4f6c-b940-a5b00cb3bacb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:22:22.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5036" for this suite.

• [SLOW TEST:12.493 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3006,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:22:22.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 22:22:23.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0488395b-fc38-440d-96b9-e33119669c02" in namespace "downward-api-9413" to be "success or failure"
Jan 30 22:22:23.093: INFO: Pod "downwardapi-volume-0488395b-fc38-440d-96b9-e33119669c02": Phase="Pending", Reason="", readiness=false. Elapsed: 11.236608ms
Jan 30 22:22:25.099: INFO: Pod "downwardapi-volume-0488395b-fc38-440d-96b9-e33119669c02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017309241s
Jan 30 22:22:27.105: INFO: Pod "downwardapi-volume-0488395b-fc38-440d-96b9-e33119669c02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0236318s
Jan 30 22:22:29.112: INFO: Pod "downwardapi-volume-0488395b-fc38-440d-96b9-e33119669c02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030744986s
Jan 30 22:22:31.117: INFO: Pod "downwardapi-volume-0488395b-fc38-440d-96b9-e33119669c02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035408601s
STEP: Saw pod success
Jan 30 22:22:31.117: INFO: Pod "downwardapi-volume-0488395b-fc38-440d-96b9-e33119669c02" satisfied condition "success or failure"
Jan 30 22:22:31.120: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0488395b-fc38-440d-96b9-e33119669c02 container client-container: 
STEP: delete the pod
Jan 30 22:22:31.151: INFO: Waiting for pod downwardapi-volume-0488395b-fc38-440d-96b9-e33119669c02 to disappear
Jan 30 22:22:31.163: INFO: Pod downwardapi-volume-0488395b-fc38-440d-96b9-e33119669c02 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:22:31.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9413" for this suite.

• [SLOW TEST:8.358 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3035,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:22:31.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Jan 30 22:22:31.284: INFO: Waiting up to 5m0s for pod "var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d" in namespace "var-expansion-4909" to be "success or failure"
Jan 30 22:22:31.298: INFO: Pod "var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.717341ms
Jan 30 22:22:33.310: INFO: Pod "var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025410888s
Jan 30 22:22:35.316: INFO: Pod "var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032290818s
Jan 30 22:22:37.340: INFO: Pod "var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056104052s
Jan 30 22:22:39.346: INFO: Pod "var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061623799s
Jan 30 22:22:41.353: INFO: Pod "var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068786667s
STEP: Saw pod success
Jan 30 22:22:41.353: INFO: Pod "var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d" satisfied condition "success or failure"
Jan 30 22:22:41.358: INFO: Trying to get logs from node jerma-node pod var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d container dapi-container: 
STEP: delete the pod
Jan 30 22:22:41.466: INFO: Waiting for pod var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d to disappear
Jan 30 22:22:41.472: INFO: Pod var-expansion-7e2b54eb-4332-4368-9699-3d4080a5a77d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:22:41.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4909" for this suite.

• [SLOW TEST:10.307 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3079,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:22:41.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-be7fbe02-7740-4616-8dfe-83b253953ca7
STEP: Creating a pod to test consume configMaps
Jan 30 22:22:41.588: INFO: Waiting up to 5m0s for pod "pod-configmaps-487432b6-bde9-469d-b81d-8ebe62cd075a" in namespace "configmap-7919" to be "success or failure"
Jan 30 22:22:41.630: INFO: Pod "pod-configmaps-487432b6-bde9-469d-b81d-8ebe62cd075a": Phase="Pending", Reason="", readiness=false. Elapsed: 41.433506ms
Jan 30 22:22:43.638: INFO: Pod "pod-configmaps-487432b6-bde9-469d-b81d-8ebe62cd075a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049685875s
Jan 30 22:22:45.645: INFO: Pod "pod-configmaps-487432b6-bde9-469d-b81d-8ebe62cd075a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056898592s
Jan 30 22:22:47.651: INFO: Pod "pod-configmaps-487432b6-bde9-469d-b81d-8ebe62cd075a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062954927s
Jan 30 22:22:49.661: INFO: Pod "pod-configmaps-487432b6-bde9-469d-b81d-8ebe62cd075a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072406606s
STEP: Saw pod success
Jan 30 22:22:49.661: INFO: Pod "pod-configmaps-487432b6-bde9-469d-b81d-8ebe62cd075a" satisfied condition "success or failure"
Jan 30 22:22:49.666: INFO: Trying to get logs from node jerma-node pod pod-configmaps-487432b6-bde9-469d-b81d-8ebe62cd075a container configmap-volume-test: 
STEP: delete the pod
Jan 30 22:22:49.709: INFO: Waiting for pod pod-configmaps-487432b6-bde9-469d-b81d-8ebe62cd075a to disappear
Jan 30 22:22:49.797: INFO: Pod pod-configmaps-487432b6-bde9-469d-b81d-8ebe62cd075a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:22:49.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7919" for this suite.

• [SLOW TEST:8.337 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3079,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:22:49.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:22:50.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3524" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":193,"skipped":3094,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:22:50.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 30 22:22:50.124: INFO: Waiting up to 5m0s for pod "pod-46afa03a-3fdf-41b9-97ac-1d5179575e5b" in namespace "emptydir-8909" to be "success or failure"
Jan 30 22:22:50.143: INFO: Pod "pod-46afa03a-3fdf-41b9-97ac-1d5179575e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.197319ms
Jan 30 22:22:52.152: INFO: Pod "pod-46afa03a-3fdf-41b9-97ac-1d5179575e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027813303s
Jan 30 22:22:54.158: INFO: Pod "pod-46afa03a-3fdf-41b9-97ac-1d5179575e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034214508s
Jan 30 22:22:56.197: INFO: Pod "pod-46afa03a-3fdf-41b9-97ac-1d5179575e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073295379s
Jan 30 22:22:58.204: INFO: Pod "pod-46afa03a-3fdf-41b9-97ac-1d5179575e5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079435785s
STEP: Saw pod success
Jan 30 22:22:58.204: INFO: Pod "pod-46afa03a-3fdf-41b9-97ac-1d5179575e5b" satisfied condition "success or failure"
Jan 30 22:22:58.207: INFO: Trying to get logs from node jerma-node pod pod-46afa03a-3fdf-41b9-97ac-1d5179575e5b container test-container: 
STEP: delete the pod
Jan 30 22:22:58.554: INFO: Waiting for pod pod-46afa03a-3fdf-41b9-97ac-1d5179575e5b to disappear
Jan 30 22:22:58.578: INFO: Pod pod-46afa03a-3fdf-41b9-97ac-1d5179575e5b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:22:58.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8909" for this suite.

• [SLOW TEST:8.570 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3140,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:22:58.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-9492
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9492 to expose endpoints map[]
Jan 30 22:22:58.766: INFO: successfully validated that service endpoint-test2 in namespace services-9492 exposes endpoints map[] (28.523548ms elapsed)
STEP: Creating pod pod1 in namespace services-9492
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9492 to expose endpoints map[pod1:[80]]
Jan 30 22:23:02.960: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.174500971s elapsed, will retry)
Jan 30 22:23:07.018: INFO: successfully validated that service endpoint-test2 in namespace services-9492 exposes endpoints map[pod1:[80]] (8.232556782s elapsed)
STEP: Creating pod pod2 in namespace services-9492
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9492 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 30 22:23:11.420: INFO: Unexpected endpoints: found map[9e600fdb-9428-4856-bafe-8aabba0e6f6f:[80]], expected map[pod1:[80] pod2:[80]] (4.388567576s elapsed, will retry)
Jan 30 22:23:13.456: INFO: successfully validated that service endpoint-test2 in namespace services-9492 exposes endpoints map[pod1:[80] pod2:[80]] (6.424259016s elapsed)
STEP: Deleting pod pod1 in namespace services-9492
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9492 to expose endpoints map[pod2:[80]]
Jan 30 22:23:13.518: INFO: successfully validated that service endpoint-test2 in namespace services-9492 exposes endpoints map[pod2:[80]] (46.78878ms elapsed)
STEP: Deleting pod pod2 in namespace services-9492
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9492 to expose endpoints map[]
Jan 30 22:23:14.649: INFO: successfully validated that service endpoint-test2 in namespace services-9492 exposes endpoints map[] (1.118931475s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:23:14.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9492" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:16.160 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":195,"skipped":3150,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:23:14.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3835
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3835
STEP: Creating statefulset with conflicting port in namespace statefulset-3835
STEP: Waiting until pod test-pod is running in namespace statefulset-3835
STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-3835
Jan 30 22:23:25.017: INFO: Observed stateful pod in namespace: statefulset-3835, name: ss-0, uid: 892bfcc0-c1fd-4be4-8ad4-bc2c76c099e8, status phase: Pending. Waiting for statefulset controller to delete.
Jan 30 22:23:25.108: INFO: Observed stateful pod in namespace: statefulset-3835, name: ss-0, uid: 892bfcc0-c1fd-4be4-8ad4-bc2c76c099e8, status phase: Failed. Waiting for statefulset controller to delete.
Jan 30 22:23:25.138: INFO: Observed stateful pod in namespace: statefulset-3835, name: ss-0, uid: 892bfcc0-c1fd-4be4-8ad4-bc2c76c099e8, status phase: Failed. Waiting for statefulset controller to delete.
Jan 30 22:23:25.187: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3835
STEP: Removing pod with conflicting port in namespace statefulset-3835
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3835 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 30 22:23:35.305: INFO: Deleting all statefulset in ns statefulset-3835
Jan 30 22:23:35.310: INFO: Scaling statefulset ss to 0
Jan 30 22:23:55.342: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 22:23:55.347: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:23:55.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3835" for this suite.

• [SLOW TEST:40.634 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":196,"skipped":3155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:23:55.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 30 22:24:04.092: INFO: Successfully updated pod "annotationupdate8e814a17-af35-448d-82cf-4bdfad133dfe"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:24:06.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5593" for this suite.

• [SLOW TEST:10.750 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3239,"failed":0}
S
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:24:06.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:24:38.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9006" for this suite.

• [SLOW TEST:32.149 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":198,"skipped":3240,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:24:38.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-d57a8c4a-b342-4a05-868a-ceac1235bd18
STEP: Creating a pod to test consume configMaps
Jan 30 22:24:38.525: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3" in namespace "projected-678" to be "success or failure"
Jan 30 22:24:38.650: INFO: Pod "pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 125.377268ms
Jan 30 22:24:40.661: INFO: Pod "pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135773093s
Jan 30 22:24:42.669: INFO: Pod "pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143969078s
Jan 30 22:24:44.679: INFO: Pod "pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154127694s
Jan 30 22:24:46.686: INFO: Pod "pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160500937s
Jan 30 22:24:48.692: INFO: Pod "pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.16718719s
STEP: Saw pod success
Jan 30 22:24:48.692: INFO: Pod "pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3" satisfied condition "success or failure"
Jan 30 22:24:48.698: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 22:24:48.891: INFO: Waiting for pod pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3 to disappear
Jan 30 22:24:48.913: INFO: Pod pod-projected-configmaps-adc9d366-45ac-4342-99cf-a4985397a6a3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:24:48.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-678" for this suite.

• [SLOW TEST:10.643 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3243,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:24:48.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2997.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 211.13.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.13.211_udp@PTR;check="$$(dig +tcp +noall +answer +search 211.13.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.13.211_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2997.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2997.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2997.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2997.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2997.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 211.13.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.13.211_udp@PTR;check="$$(dig +tcp +noall +answer +search 211.13.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.13.211_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 22:25:01.142: INFO: Unable to read wheezy_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:01.146: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:01.149: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:01.154: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:01.187: INFO: Unable to read jessie_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:01.191: INFO: Unable to read jessie_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:01.194: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:01.197: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:01.218: INFO: Lookups using dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e failed for: [wheezy_udp@dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_udp@dns-test-service.dns-2997.svc.cluster.local jessie_tcp@dns-test-service.dns-2997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local]

Jan 30 22:25:06.227: INFO: Unable to read wheezy_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:06.235: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:06.242: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:06.248: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:06.307: INFO: Unable to read jessie_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:06.319: INFO: Unable to read jessie_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:06.326: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:06.333: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:06.413: INFO: Lookups using dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e failed for: [wheezy_udp@dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_udp@dns-test-service.dns-2997.svc.cluster.local jessie_tcp@dns-test-service.dns-2997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local]

Jan 30 22:25:11.227: INFO: Unable to read wheezy_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:11.233: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:11.239: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:11.245: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:11.307: INFO: Unable to read jessie_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:11.315: INFO: Unable to read jessie_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:11.322: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:11.328: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:11.371: INFO: Lookups using dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e failed for: [wheezy_udp@dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_udp@dns-test-service.dns-2997.svc.cluster.local jessie_tcp@dns-test-service.dns-2997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local]

Jan 30 22:25:16.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:16.227: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:16.229: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:16.233: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:16.257: INFO: Unable to read jessie_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:16.260: INFO: Unable to read jessie_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:16.263: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:16.266: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:16.289: INFO: Lookups using dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e failed for: [wheezy_udp@dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_udp@dns-test-service.dns-2997.svc.cluster.local jessie_tcp@dns-test-service.dns-2997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local]

Jan 30 22:25:21.227: INFO: Unable to read wheezy_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:21.231: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:21.236: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:21.240: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:21.285: INFO: Unable to read jessie_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:21.289: INFO: Unable to read jessie_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:21.293: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:21.297: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:21.320: INFO: Lookups using dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e failed for: [wheezy_udp@dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_udp@dns-test-service.dns-2997.svc.cluster.local jessie_tcp@dns-test-service.dns-2997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local]

Jan 30 22:25:26.225: INFO: Unable to read wheezy_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:26.229: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:26.236: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:26.240: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:26.271: INFO: Unable to read jessie_udp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:26.274: INFO: Unable to read jessie_tcp@dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:26.278: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:26.282: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local from pod dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e: the server could not find the requested resource (get pods dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e)
Jan 30 22:25:26.306: INFO: Lookups using dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e failed for: [wheezy_udp@dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@dns-test-service.dns-2997.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_udp@dns-test-service.dns-2997.svc.cluster.local jessie_tcp@dns-test-service.dns-2997.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2997.svc.cluster.local]

Jan 30 22:25:31.307: INFO: DNS probes using dns-2997/dns-test-f1ea34c4-96b3-45de-8891-c70ae5108c1e succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:25:31.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2997" for this suite.

• [SLOW TEST:42.649 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":200,"skipped":3261,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:25:31.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 22:25:32.428: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 22:25:34.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:25:36.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:25:38.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:25:40.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716019932, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 22:25:43.489: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:25:43.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1801-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:25:44.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4892" for this suite.
STEP: Destroying namespace "webhook-4892-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.337 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":201,"skipped":3268,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:25:44.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2349.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2349.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 22:25:59.114: INFO: DNS probes using dns-2349/dns-test-19c76bdd-d696-496d-ab51-e8ab2f65c4e9 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:25:59.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2349" for this suite.

• [SLOW TEST:14.277 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":202,"skipped":3275,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:25:59.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jan 30 22:25:59.353: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 30 22:25:59.404: INFO: Waiting for terminating namespaces to be deleted...
Jan 30 22:25:59.411: INFO: Logging pods the kubelet thinks are on node jerma-node before test
Jan 30 22:25:59.418: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 30 22:25:59.418: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 22:25:59.418: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 30 22:25:59.418: INFO: 	Container weave ready: true, restart count 1
Jan 30 22:25:59.418: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 22:25:59.418: INFO: 
Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Jan 30 22:25:59.436: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 30 22:25:59.436: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 30 22:25:59.436: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 30 22:25:59.436: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 30 22:25:59.436: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 30 22:25:59.436: INFO: 	Container etcd ready: true, restart count 1
Jan 30 22:25:59.436: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 30 22:25:59.436: INFO: 	Container coredns ready: true, restart count 0
Jan 30 22:25:59.436: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 30 22:25:59.436: INFO: 	Container coredns ready: true, restart count 0
Jan 30 22:25:59.436: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 30 22:25:59.436: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 30 22:25:59.436: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 30 22:25:59.436: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 22:25:59.436: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 30 22:25:59.436: INFO: 	Container weave ready: true, restart count 0
Jan 30 22:25:59.436: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-05f671a3-f70c-4afe-a2d3-4c76044c07e2 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-05f671a3-f70c-4afe-a2d3-4c76044c07e2 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-05f671a3-f70c-4afe-a2d3-4c76044c07e2
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:26:34.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5555" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:35.154 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":203,"skipped":3286,"failed":0}
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:26:34.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:26:34.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:26:41.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1438" for this suite.

• [SLOW TEST:7.006 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3286,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:26:41.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-9293dd28-c5f6-43d8-bb39-b65b4131193a
STEP: Creating a pod to test consume configMaps
Jan 30 22:26:41.634: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a" in namespace "configmap-4854" to be "success or failure"
Jan 30 22:26:41.670: INFO: Pod "pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.178005ms
Jan 30 22:26:43.676: INFO: Pod "pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041043494s
Jan 30 22:26:45.683: INFO: Pod "pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048325314s
Jan 30 22:26:48.245: INFO: Pod "pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.610885645s
Jan 30 22:26:50.252: INFO: Pod "pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a": Phase="Running", Reason="", readiness=true. Elapsed: 8.617757511s
Jan 30 22:26:52.358: INFO: Pod "pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.723849743s
STEP: Saw pod success
Jan 30 22:26:52.359: INFO: Pod "pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a" satisfied condition "success or failure"
Jan 30 22:26:52.382: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a container configmap-volume-test: 
STEP: delete the pod
Jan 30 22:26:52.675: INFO: Waiting for pod pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a to disappear
Jan 30 22:26:52.694: INFO: Pod pod-configmaps-4a295d1f-fc81-4688-8587-90d8fd86579a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:26:52.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4854" for this suite.

• [SLOW TEST:11.391 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3384,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:26:52.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5804.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5804.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5804.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5804.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 22:27:05.016: INFO: DNS probes using dns-test-e7b126d4-ba0c-4bb6-95c4-9deddef1708c succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5804.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5804.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5804.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5804.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 22:27:19.144: INFO: File wheezy_udp@dns-test-service-3.dns-5804.svc.cluster.local from pod  dns-5804/dns-test-ed079257-ad1c-4a41-93b2-ada48acb0262 contains 'foo.example.com.' instead of 'bar.example.com.'
Jan 30 22:27:19.148: INFO: File jessie_udp@dns-test-service-3.dns-5804.svc.cluster.local from pod  dns-5804/dns-test-ed079257-ad1c-4a41-93b2-ada48acb0262 contains 'foo.example.com.' instead of 'bar.example.com.'
Jan 30 22:27:19.148: INFO: Lookups using dns-5804/dns-test-ed079257-ad1c-4a41-93b2-ada48acb0262 failed for: [wheezy_udp@dns-test-service-3.dns-5804.svc.cluster.local jessie_udp@dns-test-service-3.dns-5804.svc.cluster.local]

Jan 30 22:27:24.156: INFO: File wheezy_udp@dns-test-service-3.dns-5804.svc.cluster.local from pod  dns-5804/dns-test-ed079257-ad1c-4a41-93b2-ada48acb0262 contains 'foo.example.com.' instead of 'bar.example.com.'
Jan 30 22:27:24.162: INFO: File jessie_udp@dns-test-service-3.dns-5804.svc.cluster.local from pod  dns-5804/dns-test-ed079257-ad1c-4a41-93b2-ada48acb0262 contains 'foo.example.com.' instead of 'bar.example.com.'
Jan 30 22:27:24.162: INFO: Lookups using dns-5804/dns-test-ed079257-ad1c-4a41-93b2-ada48acb0262 failed for: [wheezy_udp@dns-test-service-3.dns-5804.svc.cluster.local jessie_udp@dns-test-service-3.dns-5804.svc.cluster.local]

Jan 30 22:27:29.155: INFO: File wheezy_udp@dns-test-service-3.dns-5804.svc.cluster.local from pod  dns-5804/dns-test-ed079257-ad1c-4a41-93b2-ada48acb0262 contains 'foo.example.com.' instead of 'bar.example.com.'
Jan 30 22:27:29.159: INFO: File jessie_udp@dns-test-service-3.dns-5804.svc.cluster.local from pod  dns-5804/dns-test-ed079257-ad1c-4a41-93b2-ada48acb0262 contains 'foo.example.com.' instead of 'bar.example.com.'
Jan 30 22:27:29.159: INFO: Lookups using dns-5804/dns-test-ed079257-ad1c-4a41-93b2-ada48acb0262 failed for: [wheezy_udp@dns-test-service-3.dns-5804.svc.cluster.local jessie_udp@dns-test-service-3.dns-5804.svc.cluster.local]

Jan 30 22:27:34.451: INFO: DNS probes using dns-test-ed079257-ad1c-4a41-93b2-ada48acb0262 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5804.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5804.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5804.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5804.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 22:27:48.855: INFO: DNS probes using dns-test-155cb37f-6b14-4c9e-952e-15587cfef97c succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:27:49.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5804" for this suite.

• [SLOW TEST:56.265 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":206,"skipped":3391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:27:49.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:27:49.172: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 30 22:27:54.192: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 30 22:27:58.212: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jan 30 22:27:58.252: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-1475 /apis/apps/v1/namespaces/deployment-1475/deployments/test-cleanup-deployment 528302ee-6325-4797-9d9e-f9ff39558a50 5385889 1 2020-01-30 22:27:58 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043644d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Jan 30 22:27:58.264: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-1475 /apis/apps/v1/namespaces/deployment-1475/replicasets/test-cleanup-deployment-55ffc6b7b6 eeb1afa5-1402-485d-9b28-5de84b670194 5385891 1 2020-01-30 22:27:58 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 528302ee-6325-4797-9d9e-f9ff39558a50 0xc0033d9537 0xc0033d9538}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033d95a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 30 22:27:58.264: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 30 22:27:58.265: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-1475 /apis/apps/v1/namespaces/deployment-1475/replicasets/test-cleanup-controller 3ba4468b-d617-4920-ad73-787d9f34e51a 5385890 1 2020-01-30 22:27:49 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 528302ee-6325-4797-9d9e-f9ff39558a50 0xc0033d943f 0xc0033d9450}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0033d94b8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 30 22:27:58.385: INFO: Pod "test-cleanup-controller-ckwt6" is available:
&Pod{ObjectMeta:{test-cleanup-controller-ckwt6 test-cleanup-controller- deployment-1475 /api/v1/namespaces/deployment-1475/pods/test-cleanup-controller-ckwt6 d83c6fc7-dcec-42a3-b690-8a650d28309e 5385887 0 2020-01-30 22:27:49 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 3ba4468b-d617-4920-ad73-787d9f34e51a 0xc0033d9a17 0xc0033d9a18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vhc8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vhc8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vhc8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 22:27:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 22:27:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 22:27:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 22:27:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-30 22:27:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-30 22:27:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://56cb88c186b5153313c9dd58d2091cd7904d59d4757680ffdcd43414d466da2e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 30 22:27:58.385: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-s5nvk" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-s5nvk test-cleanup-deployment-55ffc6b7b6- deployment-1475 /api/v1/namespaces/deployment-1475/pods/test-cleanup-deployment-55ffc6b7b6-s5nvk f2968094-4b70-4977-92a4-ed7187b0e655 5385897 0 2020-01-30 22:27:58 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 eeb1afa5-1402-485d-9b28-5de84b670194 0xc0033d9b97 0xc0033d9b98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vhc8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vhc8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vhc8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-30 22:27:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:27:58.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1475" for this suite.

• [SLOW TEST:9.447 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":207,"skipped":3425,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:27:58.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 30 22:27:58.644: INFO: Waiting up to 5m0s for pod "downward-api-408fda70-a645-4263-98ce-eeff889f4bae" in namespace "downward-api-5064" to be "success or failure"
Jan 30 22:27:58.662: INFO: Pod "downward-api-408fda70-a645-4263-98ce-eeff889f4bae": Phase="Pending", Reason="", readiness=false. Elapsed: 17.449715ms
Jan 30 22:28:00.679: INFO: Pod "downward-api-408fda70-a645-4263-98ce-eeff889f4bae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034028483s
Jan 30 22:28:02.687: INFO: Pod "downward-api-408fda70-a645-4263-98ce-eeff889f4bae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041841541s
Jan 30 22:28:04.692: INFO: Pod "downward-api-408fda70-a645-4263-98ce-eeff889f4bae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046903688s
Jan 30 22:28:06.698: INFO: Pod "downward-api-408fda70-a645-4263-98ce-eeff889f4bae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053384999s
Jan 30 22:28:08.704: INFO: Pod "downward-api-408fda70-a645-4263-98ce-eeff889f4bae": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059595563s
Jan 30 22:28:10.714: INFO: Pod "downward-api-408fda70-a645-4263-98ce-eeff889f4bae": Phase="Pending", Reason="", readiness=false. Elapsed: 12.06909481s
Jan 30 22:28:12.726: INFO: Pod "downward-api-408fda70-a645-4263-98ce-eeff889f4bae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.081154354s
STEP: Saw pod success
Jan 30 22:28:12.727: INFO: Pod "downward-api-408fda70-a645-4263-98ce-eeff889f4bae" satisfied condition "success or failure"
Jan 30 22:28:12.786: INFO: Trying to get logs from node jerma-node pod downward-api-408fda70-a645-4263-98ce-eeff889f4bae container dapi-container: 
STEP: delete the pod
Jan 30 22:28:12.852: INFO: Waiting for pod downward-api-408fda70-a645-4263-98ce-eeff889f4bae to disappear
Jan 30 22:28:12.937: INFO: Pod downward-api-408fda70-a645-4263-98ce-eeff889f4bae no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:28:12.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5064" for this suite.

• [SLOW TEST:14.473 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3480,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:28:12.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 30 22:28:21.650: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e47bf298-78a0-4639-aa3f-89af7b95953f"
Jan 30 22:28:21.651: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e47bf298-78a0-4639-aa3f-89af7b95953f" in namespace "pods-8169" to be "terminated due to deadline exceeded"
Jan 30 22:28:21.657: INFO: Pod "pod-update-activedeadlineseconds-e47bf298-78a0-4639-aa3f-89af7b95953f": Phase="Running", Reason="", readiness=true. Elapsed: 6.061602ms
Jan 30 22:28:23.665: INFO: Pod "pod-update-activedeadlineseconds-e47bf298-78a0-4639-aa3f-89af7b95953f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.01404352s
Jan 30 22:28:23.665: INFO: Pod "pod-update-activedeadlineseconds-e47bf298-78a0-4639-aa3f-89af7b95953f" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:28:23.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8169" for this suite.

• [SLOW TEST:10.733 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3485,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:28:23.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 22:28:23.954: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff8e7586-27ba-4eae-abd1-c68a10385030" in namespace "downward-api-3655" to be "success or failure"
Jan 30 22:28:23.965: INFO: Pod "downwardapi-volume-ff8e7586-27ba-4eae-abd1-c68a10385030": Phase="Pending", Reason="", readiness=false. Elapsed: 10.357798ms
Jan 30 22:28:25.973: INFO: Pod "downwardapi-volume-ff8e7586-27ba-4eae-abd1-c68a10385030": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018483726s
Jan 30 22:28:27.978: INFO: Pod "downwardapi-volume-ff8e7586-27ba-4eae-abd1-c68a10385030": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023676422s
Jan 30 22:28:29.984: INFO: Pod "downwardapi-volume-ff8e7586-27ba-4eae-abd1-c68a10385030": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03014642s
Jan 30 22:28:31.990: INFO: Pod "downwardapi-volume-ff8e7586-27ba-4eae-abd1-c68a10385030": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.03608468s
STEP: Saw pod success
Jan 30 22:28:31.991: INFO: Pod "downwardapi-volume-ff8e7586-27ba-4eae-abd1-c68a10385030" satisfied condition "success or failure"
Jan 30 22:28:31.994: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ff8e7586-27ba-4eae-abd1-c68a10385030 container client-container: 
STEP: delete the pod
Jan 30 22:28:32.077: INFO: Waiting for pod downwardapi-volume-ff8e7586-27ba-4eae-abd1-c68a10385030 to disappear
Jan 30 22:28:32.083: INFO: Pod downwardapi-volume-ff8e7586-27ba-4eae-abd1-c68a10385030 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:28:32.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3655" for this suite.

• [SLOW TEST:8.404 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3492,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:28:32.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-8379/configmap-test-66445800-0203-4260-8e5e-bbcd3980e5c6
STEP: Creating a pod to test consume configMaps
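
Neither the ConfigMap data nor the pod spec is printed; the pattern being exercised is an environment variable sourced via configMapKeyRef. A sketch with hypothetical key and variable names (the ConfigMap name and the container name env-test are taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # the real pod name carries a generated UUID
spec:
  restartPolicy: Never
  containers:
  - name: env-test                   # container name as reported in the log
    image: busybox                   # illustrative
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1            # hypothetical variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test-66445800-0203-4260-8e5e-bbcd3980e5c6
          key: data-1                # hypothetical key
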
Jan 30 22:28:32.249: INFO: Waiting up to 5m0s for pod "pod-configmaps-57489e70-230e-4fc2-8277-bbc69d15438a" in namespace "configmap-8379" to be "success or failure"
Jan 30 22:28:32.257: INFO: Pod "pod-configmaps-57489e70-230e-4fc2-8277-bbc69d15438a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.477666ms
Jan 30 22:28:34.265: INFO: Pod "pod-configmaps-57489e70-230e-4fc2-8277-bbc69d15438a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015578214s
Jan 30 22:28:36.278: INFO: Pod "pod-configmaps-57489e70-230e-4fc2-8277-bbc69d15438a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028997633s
Jan 30 22:28:38.291: INFO: Pod "pod-configmaps-57489e70-230e-4fc2-8277-bbc69d15438a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041253543s
Jan 30 22:28:40.299: INFO: Pod "pod-configmaps-57489e70-230e-4fc2-8277-bbc69d15438a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04907171s
STEP: Saw pod success
Jan 30 22:28:40.299: INFO: Pod "pod-configmaps-57489e70-230e-4fc2-8277-bbc69d15438a" satisfied condition "success or failure"
Jan 30 22:28:40.304: INFO: Trying to get logs from node jerma-node pod pod-configmaps-57489e70-230e-4fc2-8277-bbc69d15438a container env-test: 
STEP: delete the pod
Jan 30 22:28:40.414: INFO: Waiting for pod pod-configmaps-57489e70-230e-4fc2-8277-bbc69d15438a to disappear
Jan 30 22:28:40.438: INFO: Pod pod-configmaps-57489e70-230e-4fc2-8277-bbc69d15438a no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:28:40.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8379" for this suite.

• [SLOW TEST:8.473 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3502,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:28:40.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
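
The "(root,0644,tmpfs)" in the spec name encodes the variant: run as root, expect mode 0644 on the written file, and back the emptyDir with tmpfs (medium: Memory). A sketch under those assumptions (image and verification command are illustrative; only the container name test-container appears in the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example         # the real pod name carries a generated UUID
spec:
  restartPolicy: Never
  containers:
  - name: test-container             # container name as reported in the log
    image: busybox                   # illustrative; the framework uses its own mount-test image
    command: ["sh", "-c", "echo hi > /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
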
Jan 30 22:28:40.649: INFO: Waiting up to 5m0s for pod "pod-32742e3f-aa27-410f-a447-5a5837262c63" in namespace "emptydir-8178" to be "success or failure"
Jan 30 22:28:40.679: INFO: Pod "pod-32742e3f-aa27-410f-a447-5a5837262c63": Phase="Pending", Reason="", readiness=false. Elapsed: 29.231377ms
Jan 30 22:28:42.687: INFO: Pod "pod-32742e3f-aa27-410f-a447-5a5837262c63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037599897s
Jan 30 22:28:44.693: INFO: Pod "pod-32742e3f-aa27-410f-a447-5a5837262c63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043550022s
Jan 30 22:28:46.700: INFO: Pod "pod-32742e3f-aa27-410f-a447-5a5837262c63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050582442s
Jan 30 22:28:48.707: INFO: Pod "pod-32742e3f-aa27-410f-a447-5a5837262c63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057496142s
STEP: Saw pod success
Jan 30 22:28:48.707: INFO: Pod "pod-32742e3f-aa27-410f-a447-5a5837262c63" satisfied condition "success or failure"
Jan 30 22:28:48.713: INFO: Trying to get logs from node jerma-node pod pod-32742e3f-aa27-410f-a447-5a5837262c63 container test-container: 
STEP: delete the pod
Jan 30 22:28:48.775: INFO: Waiting for pod pod-32742e3f-aa27-410f-a447-5a5837262c63 to disappear
Jan 30 22:28:48.790: INFO: Pod pod-32742e3f-aa27-410f-a447-5a5837262c63 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:28:48.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8178" for this suite.

• [SLOW TEST:8.237 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3519,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:28:48.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
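
The RC manifest is piped to kubectl on stdin rather than logged; reconstructed from the names, labels, and image that do appear in the queries below, it is approximately:

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo            # container name matched by the template queries below
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80          # assumption; not visible in the log
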
Jan 30 22:28:48.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-639'
Jan 30 22:28:51.500: INFO: stderr: ""
Jan 30 22:28:51.500: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 22:28:51.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-639'
Jan 30 22:28:51.796: INFO: stderr: ""
Jan 30 22:28:51.797: INFO: stdout: "update-demo-nautilus-tfd65 update-demo-nautilus-wgs5b "
Jan 30 22:28:51.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tfd65 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:28:52.003: INFO: stderr: ""
Jan 30 22:28:52.004: INFO: stdout: ""
Jan 30 22:28:52.004: INFO: update-demo-nautilus-tfd65 is created but not running
Jan 30 22:28:57.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-639'
Jan 30 22:28:57.190: INFO: stderr: ""
Jan 30 22:28:57.190: INFO: stdout: "update-demo-nautilus-tfd65 update-demo-nautilus-wgs5b "
Jan 30 22:28:57.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tfd65 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:28:57.619: INFO: stderr: ""
Jan 30 22:28:57.619: INFO: stdout: ""
Jan 30 22:28:57.619: INFO: update-demo-nautilus-tfd65 is created but not running
Jan 30 22:29:02.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-639'
Jan 30 22:29:02.787: INFO: stderr: ""
Jan 30 22:29:02.787: INFO: stdout: "update-demo-nautilus-tfd65 update-demo-nautilus-wgs5b "
Jan 30 22:29:02.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tfd65 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:02.936: INFO: stderr: ""
Jan 30 22:29:02.937: INFO: stdout: "true"
Jan 30 22:29:02.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tfd65 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:03.137: INFO: stderr: ""
Jan 30 22:29:03.137: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 22:29:03.137: INFO: validating pod update-demo-nautilus-tfd65
Jan 30 22:29:03.149: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 22:29:03.149: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 30 22:29:03.149: INFO: update-demo-nautilus-tfd65 is verified up and running
Jan 30 22:29:03.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgs5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:03.252: INFO: stderr: ""
Jan 30 22:29:03.253: INFO: stdout: "true"
Jan 30 22:29:03.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgs5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:03.394: INFO: stderr: ""
Jan 30 22:29:03.394: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 22:29:03.394: INFO: validating pod update-demo-nautilus-wgs5b
Jan 30 22:29:03.402: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 22:29:03.402: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 30 22:29:03.402: INFO: update-demo-nautilus-wgs5b is verified up and running
STEP: scaling down the replication controller
Jan 30 22:29:03.404: INFO: scanned /root for discovery docs: 
Jan 30 22:29:03.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-639'
Jan 30 22:29:04.572: INFO: stderr: ""
Jan 30 22:29:04.572: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 22:29:04.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-639'
Jan 30 22:29:04.683: INFO: stderr: ""
Jan 30 22:29:04.683: INFO: stdout: "update-demo-nautilus-tfd65 update-demo-nautilus-wgs5b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 30 22:29:09.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-639'
Jan 30 22:29:09.811: INFO: stderr: ""
Jan 30 22:29:09.811: INFO: stdout: "update-demo-nautilus-tfd65 update-demo-nautilus-wgs5b "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 30 22:29:14.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-639'
Jan 30 22:29:15.018: INFO: stderr: ""
Jan 30 22:29:15.018: INFO: stdout: "update-demo-nautilus-wgs5b "
Jan 30 22:29:15.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgs5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:15.133: INFO: stderr: ""
Jan 30 22:29:15.133: INFO: stdout: "true"
Jan 30 22:29:15.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgs5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:15.219: INFO: stderr: ""
Jan 30 22:29:15.220: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 22:29:15.220: INFO: validating pod update-demo-nautilus-wgs5b
Jan 30 22:29:15.226: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 22:29:15.226: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 30 22:29:15.226: INFO: update-demo-nautilus-wgs5b is verified up and running
STEP: scaling up the replication controller
Jan 30 22:29:15.229: INFO: scanned /root for discovery docs: 
Jan 30 22:29:15.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-639'
Jan 30 22:29:16.888: INFO: stderr: ""
Jan 30 22:29:16.888: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 22:29:16.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-639'
Jan 30 22:29:17.093: INFO: stderr: ""
Jan 30 22:29:17.093: INFO: stdout: "update-demo-nautilus-wgs5b update-demo-nautilus-xm2pj "
Jan 30 22:29:17.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgs5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:17.226: INFO: stderr: ""
Jan 30 22:29:17.226: INFO: stdout: "true"
Jan 30 22:29:17.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgs5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:17.615: INFO: stderr: ""
Jan 30 22:29:17.615: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 22:29:17.615: INFO: validating pod update-demo-nautilus-wgs5b
Jan 30 22:29:17.623: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 22:29:17.623: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 30 22:29:17.623: INFO: update-demo-nautilus-wgs5b is verified up and running
Jan 30 22:29:17.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xm2pj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:17.801: INFO: stderr: ""
Jan 30 22:29:17.801: INFO: stdout: ""
Jan 30 22:29:17.801: INFO: update-demo-nautilus-xm2pj is created but not running
Jan 30 22:29:22.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-639'
Jan 30 22:29:22.934: INFO: stderr: ""
Jan 30 22:29:22.934: INFO: stdout: "update-demo-nautilus-wgs5b update-demo-nautilus-xm2pj "
Jan 30 22:29:22.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgs5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:23.058: INFO: stderr: ""
Jan 30 22:29:23.059: INFO: stdout: "true"
Jan 30 22:29:23.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgs5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:23.178: INFO: stderr: ""
Jan 30 22:29:23.179: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 22:29:23.179: INFO: validating pod update-demo-nautilus-wgs5b
Jan 30 22:29:23.188: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 22:29:23.188: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 30 22:29:23.188: INFO: update-demo-nautilus-wgs5b is verified up and running
Jan 30 22:29:23.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xm2pj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:23.325: INFO: stderr: ""
Jan 30 22:29:23.326: INFO: stdout: "true"
Jan 30 22:29:23.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xm2pj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-639'
Jan 30 22:29:23.456: INFO: stderr: ""
Jan 30 22:29:23.457: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 22:29:23.457: INFO: validating pod update-demo-nautilus-xm2pj
Jan 30 22:29:23.461: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 22:29:23.461: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 30 22:29:23.461: INFO: update-demo-nautilus-xm2pj is verified up and running
STEP: using delete to clean up resources
Jan 30 22:29:23.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-639'
Jan 30 22:29:23.582: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 22:29:23.582: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 30 22:29:23.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-639'
Jan 30 22:29:23.722: INFO: stderr: "No resources found in kubectl-639 namespace.\n"
Jan 30 22:29:23.722: INFO: stdout: ""
Jan 30 22:29:23.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-639 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 30 22:29:23.940: INFO: stderr: ""
Jan 30 22:29:23.941: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:29:23.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-639" for this suite.

• [SLOW TEST:35.179 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":213,"skipped":3541,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:29:23.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
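
The Got : ADDED/MODIFIED/DELETED lines below are Go struct dumps; the label-A object they render corresponds to roughly this manifest (shown after the first mutation), and each watch is the API equivalent of kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-518
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"
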
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 30 22:29:24.149: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-a 3d834632-a232-49dd-b924-17e5d49ae664 5386304 0 2020-01-30 22:29:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 30 22:29:24.149: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-a 3d834632-a232-49dd-b924-17e5d49ae664 5386304 0 2020-01-30 22:29:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 30 22:29:34.157: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-a 3d834632-a232-49dd-b924-17e5d49ae664 5386356 0 2020-01-30 22:29:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 30 22:29:34.157: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-a 3d834632-a232-49dd-b924-17e5d49ae664 5386356 0 2020-01-30 22:29:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 30 22:29:44.167: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-a 3d834632-a232-49dd-b924-17e5d49ae664 5386382 0 2020-01-30 22:29:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 30 22:29:44.167: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-a 3d834632-a232-49dd-b924-17e5d49ae664 5386382 0 2020-01-30 22:29:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 30 22:29:54.173: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-a 3d834632-a232-49dd-b924-17e5d49ae664 5386406 0 2020-01-30 22:29:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 30 22:29:54.173: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-a 3d834632-a232-49dd-b924-17e5d49ae664 5386406 0 2020-01-30 22:29:24 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 30 22:30:04.185: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-b 319f13c8-5ed5-4a3f-b86d-5f11b2fc9200 5386426 0 2020-01-30 22:30:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 30 22:30:04.185: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-b 319f13c8-5ed5-4a3f-b86d-5f11b2fc9200 5386426 0 2020-01-30 22:30:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 30 22:30:14.193: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-b 319f13c8-5ed5-4a3f-b86d-5f11b2fc9200 5386450 0 2020-01-30 22:30:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 30 22:30:14.193: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-518 /api/v1/namespaces/watch-518/configmaps/e2e-watch-test-configmap-b 319f13c8-5ed5-4a3f-b86d-5f11b2fc9200 5386450 0 2020-01-30 22:30:04 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:30:24.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-518" for this suite.

• [SLOW TEST:60.222 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":214,"skipped":3543,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:30:24.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-c38cd586-77cc-4adf-adb1-fbb1db555a6d
STEP: Creating secret with name s-test-opt-upd-c504d7c4-247a-4d2d-b136-a234c372e40e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c38cd586-77cc-4adf-adb1-fbb1db555a6d
STEP: Updating secret s-test-opt-upd-c504d7c4-247a-4d2d-b136-a234c372e40e
STEP: Creating secret with name s-test-opt-create-28dbb61b-c5b6-4630-a9d4-82dde65978f9
STEP: waiting to observe update in volume
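
The pod spec is not logged; the pattern under test is a projected volume whose secret sources are all marked optional, so a deleted or late-created Secret surfaces as a volume content update rather than a mount failure. A sketch (container name, image, and mount path are assumptions; the Secret names are from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  containers:
  - name: projected-secret-volume-test   # hypothetical container name
    image: busybox                       # illustrative
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-del-c38cd586-77cc-4adf-adb1-fbb1db555a6d
          optional: true
      - secret:
          name: s-test-opt-upd-c504d7c4-247a-4d2d-b136-a234c372e40e
          optional: true
      - secret:
          name: s-test-opt-create-28dbb61b-c5b6-4630-a9d4-82dde65978f9
          optional: true
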
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:32:03.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4535" for this suite.

• [SLOW TEST:99.597 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3588,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:32:03.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
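
The test exercises basic CRUD against the coordination.k8s.io Lease API; a representative object of the kind it round-trips (all names and field values illustrative):

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: lease-example                # hypothetical
  namespace: lease-test-2074
spec:
  holderIdentity: holder-1           # illustrative
  leaseDurationSeconds: 30
  acquireTime: "2020-01-30T22:32:04.000000Z"   # metav1.MicroTime; illustrative
  renewTime: "2020-01-30T22:32:04.000000Z"
  leaseTransitions: 0
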
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:32:04.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2074" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":216,"skipped":3598,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:32:04.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jan 30 22:32:04.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
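
Renaming a served version means swapping its entry in spec.versions while leaving the other version alone; the published OpenAPI document has to track that change. Schematically, with hypothetical group, kind, and version names (the apiextensions.k8s.io/v1 fields themselves are real):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.crd-publish-openapi-test.example.com   # hypothetical
spec:
  group: crd-publish-openapi-test.example.com                # hypothetical
  scope: Namespaced
  names:
    plural: e2e-test-foos
    singular: e2e-test-foo
    kind: E2eTestFoo
  versions:
  - name: v4              # renamed version; must now appear in the published spec
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v6              # the untouched second version; its published spec must not change
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
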
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:32:21.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-937" for this suite.

• [SLOW TEST:17.261 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":217,"skipped":3614,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:32:21.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Jan 30 22:32:21.422: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jan 30 22:32:21.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7620'
Jan 30 22:32:21.927: INFO: stderr: ""
Jan 30 22:32:21.928: INFO: stdout: "service/agnhost-slave created\n"
Jan 30 22:32:21.928: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jan 30 22:32:21.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7620'
Jan 30 22:32:22.565: INFO: stderr: ""
Jan 30 22:32:22.565: INFO: stdout: "service/agnhost-master created\n"
Jan 30 22:32:22.566: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 30 22:32:22.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7620'
Jan 30 22:32:23.024: INFO: stderr: ""
Jan 30 22:32:23.024: INFO: stdout: "service/frontend created\n"
Jan 30 22:32:23.025: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan 30 22:32:23.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7620'
Jan 30 22:32:23.432: INFO: stderr: ""
Jan 30 22:32:23.432: INFO: stdout: "deployment.apps/frontend created\n"
Jan 30 22:32:23.432: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 30 22:32:23.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7620'
Jan 30 22:32:23.989: INFO: stderr: ""
Jan 30 22:32:23.989: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan 30 22:32:23.990: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 30 22:32:23.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7620'
Jan 30 22:32:25.218: INFO: stderr: ""
Jan 30 22:32:25.218: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 30 22:32:25.218: INFO: Waiting for all frontend pods to be Running.
Jan 30 22:32:45.270: INFO: Waiting for frontend to serve content.
Jan 30 22:32:45.287: INFO: Trying to add a new entry to the guestbook.
Jan 30 22:32:45.311: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused

[the identical failure was logged 34 more times at roughly 5 s intervals, Jan 30 22:32:50.334 through 22:35:36.076: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused]

Jan 30 22:35:41.094: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.44.0.0': Get http://10.44.0.0:6379/set?key=messages&value=TestEntry: dial tcp 10.44.0.0:6379: connect: connection refused

Jan 30 22:35:46.096: FAIL: Cannot add new entry in 180 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x5424e60, 0xc00143edc0, 0xc0031a94c0, 0xc)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:417 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0029e6900)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc0029e6900)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:112 +0x2b
testing.tRunner(0xc0029e6900, 0x4c30de8)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
STEP: using delete to clean up resources
Jan 30 22:35:46.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7620'
Jan 30 22:35:46.313: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 22:35:46.314: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 30 22:35:46.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7620'
Jan 30 22:35:46.549: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 22:35:46.549: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 30 22:35:46.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7620'
Jan 30 22:35:46.713: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 22:35:46.713: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 30 22:35:46.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7620'
Jan 30 22:35:46.842: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 22:35:46.842: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 30 22:35:46.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7620'
Jan 30 22:35:47.012: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 22:35:47.012: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 30 22:35:47.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7620'
Jan 30 22:35:47.308: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 30 22:35:47.308: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "kubectl-7620".
STEP: Found 37 events.
Jan 30 22:35:48.324: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-phn4b: {default-scheduler } Scheduled: Successfully assigned kubectl-7620/agnhost-master-74c46fb7d4-phn4b to jerma-node
Jan 30 22:35:48.324: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-cnfwt: {default-scheduler } Scheduled: Successfully assigned kubectl-7620/agnhost-slave-774cfc759f-cnfwt to jerma-node
Jan 30 22:35:48.324: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-qmjpq: {default-scheduler } Scheduled: Successfully assigned kubectl-7620/agnhost-slave-774cfc759f-qmjpq to jerma-server-mvvl6gufaqub
Jan 30 22:35:48.324: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-2sbxp: {default-scheduler } Scheduled: Successfully assigned kubectl-7620/frontend-6c5f89d5d4-2sbxp to jerma-node
Jan 30 22:35:48.324: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-gj8bl: {default-scheduler } Scheduled: Successfully assigned kubectl-7620/frontend-6c5f89d5d4-gj8bl to jerma-node
Jan 30 22:35:48.324: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-x5btj: {default-scheduler } Scheduled: Successfully assigned kubectl-7620/frontend-6c5f89d5d4-x5btj to jerma-server-mvvl6gufaqub
Jan 30 22:35:48.324: INFO: At 2020-01-30 22:32:23 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Jan 30 22:35:48.324: INFO: At 2020-01-30 22:32:23 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-phn4b
Jan 30 22:35:48.324: INFO: At 2020-01-30 22:32:23 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Jan 30 22:35:48.324: INFO: At 2020-01-30 22:32:23 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-2sbxp
Jan 30 22:35:48.324: INFO: At 2020-01-30 22:32:23 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-gj8bl
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:23 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-x5btj
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:25 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:25 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-qmjpq
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:25 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-cnfwt
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:30 +0000 UTC - event for frontend-6c5f89d5d4-x5btj: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:31 +0000 UTC - event for agnhost-slave-774cfc759f-qmjpq: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:31 +0000 UTC - event for frontend-6c5f89d5d4-2sbxp: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:33 +0000 UTC - event for frontend-6c5f89d5d4-gj8bl: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:35 +0000 UTC - event for agnhost-master-74c46fb7d4-phn4b: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:35 +0000 UTC - event for agnhost-slave-774cfc759f-cnfwt: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:35 +0000 UTC - event for agnhost-slave-774cfc759f-qmjpq: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:35 +0000 UTC - event for frontend-6c5f89d5d4-x5btj: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:36 +0000 UTC - event for agnhost-slave-774cfc759f-qmjpq: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:36 +0000 UTC - event for frontend-6c5f89d5d4-x5btj: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:38 +0000 UTC - event for agnhost-master-74c46fb7d4-phn4b: {kubelet jerma-node} Created: Created container master
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:38 +0000 UTC - event for agnhost-slave-774cfc759f-cnfwt: {kubelet jerma-node} Created: Created container slave
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:38 +0000 UTC - event for frontend-6c5f89d5d4-2sbxp: {kubelet jerma-node} Created: Created container guestbook-frontend
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:38 +0000 UTC - event for frontend-6c5f89d5d4-gj8bl: {kubelet jerma-node} Created: Created container guestbook-frontend
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:39 +0000 UTC - event for agnhost-master-74c46fb7d4-phn4b: {kubelet jerma-node} Started: Started container master
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:39 +0000 UTC - event for agnhost-slave-774cfc759f-cnfwt: {kubelet jerma-node} Started: Started container slave
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:39 +0000 UTC - event for frontend-6c5f89d5d4-2sbxp: {kubelet jerma-node} Started: Started container guestbook-frontend
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:32:39 +0000 UTC - event for frontend-6c5f89d5d4-gj8bl: {kubelet jerma-node} Started: Started container guestbook-frontend
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:35:47 +0000 UTC - event for agnhost-master-74c46fb7d4-phn4b: {kubelet jerma-node} Killing: Stopping container master
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:35:47 +0000 UTC - event for frontend-6c5f89d5d4-2sbxp: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:35:47 +0000 UTC - event for frontend-6c5f89d5d4-gj8bl: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Jan 30 22:35:48.325: INFO: At 2020-01-30 22:35:47 +0000 UTC - event for frontend-6c5f89d5d4-x5btj: {kubelet jerma-server-mvvl6gufaqub} Killing: Stopping container guestbook-frontend
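
The event stream above shows a clean schedule -> pull -> create -> start lifecycle for every guestbook pod, followed only by tear-down Killing events, so the failure recorded below is not a scheduling or image problem. For reference, a dump in this shape can be reproduced against the same cluster with client-go. This is a sketch only: the namespace and kubeconfig path come from the log, the context-taking List signature assumes client-go v0.18+, and error handling is minimal.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the e2e run uses, per the log.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	events, err := client.CoreV1().Events("kubectl-7620").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Mirrors the "At <time> - event for <object>: {<source>} <reason>: <message>" lines above.
		fmt.Printf("At %v - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
}
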
Jan 30 22:35:48.598: INFO: POD                              NODE                       PHASE    GRACE  CONDITIONS
Jan 30 22:35:48.598: INFO: agnhost-master-74c46fb7d4-phn4b  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:23 +0000 UTC  }]
Jan 30 22:35:48.598: INFO: agnhost-slave-774cfc759f-cnfwt   jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:25 +0000 UTC  }]
Jan 30 22:35:48.598: INFO: agnhost-slave-774cfc759f-qmjpq   jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:25 +0000 UTC  }]
Jan 30 22:35:48.598: INFO: frontend-6c5f89d5d4-2sbxp        jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:23 +0000 UTC  }]
Jan 30 22:35:48.598: INFO: frontend-6c5f89d5d4-gj8bl        jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:23 +0000 UTC  }]
Jan 30 22:35:48.598: INFO: frontend-6c5f89d5d4-x5btj        jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:32:23 +0000 UTC  }]
Jan 30 22:35:48.598: INFO: 
Jan 30 22:35:48.603: INFO: 
Logging node info for node jerma-node
Jan 30 22:35:48.638: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 5387074 0 2020-01-04 11:59:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-30 22:33:24 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-30 22:33:24 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-30 22:33:24 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-30 22:33:24 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 30 22:35:48.639: INFO: 
Logging kubelet events for node jerma-node
Jan 30 22:35:48.743: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Jan 30 22:35:48.789: INFO: frontend-6c5f89d5d4-gj8bl started at 2020-01-30 22:32:23 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:48.789: INFO: 	Container guestbook-frontend ready: true, restart count 0
Jan 30 22:35:48.789: INFO: frontend-6c5f89d5d4-2sbxp started at 2020-01-30 22:32:23 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:48.789: INFO: 	Container guestbook-frontend ready: true, restart count 0
Jan 30 22:35:48.789: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:48.789: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 22:35:48.789: INFO: agnhost-slave-774cfc759f-cnfwt started at 2020-01-30 22:32:26 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:48.790: INFO: 	Container slave ready: true, restart count 0
Jan 30 22:35:48.790: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Jan 30 22:35:48.790: INFO: 	Container weave ready: true, restart count 1
Jan 30 22:35:48.790: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 22:35:48.790: INFO: agnhost-master-74c46fb7d4-phn4b started at 2020-01-30 22:32:25 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:48.790: INFO: 	Container master ready: true, restart count 0
W0130 22:35:48.796116       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 22:35:48.851: INFO: 
Latency metrics for node jerma-node
Jan 30 22:35:48.851: INFO: 
Logging node info for node jerma-server-mvvl6gufaqub
Jan 30 22:35:50.260: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub   /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 5387106 0 2020-01-04 11:47:40 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-01-30 22:33:38 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-01-30 22:33:38 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-01-30 22:33:38 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-01-30 22:33:38 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 30 22:35:50.261: INFO: 
Logging kubelet events for node jerma-server-mvvl6gufaqub
Jan 30 22:35:50.268: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Jan 30 22:35:50.519: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:50.519: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 30 22:35:50.519: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:50.520: INFO: 	Container etcd ready: true, restart count 1
Jan 30 22:35:50.520: INFO: frontend-6c5f89d5d4-x5btj started at 2020-01-30 22:32:23 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:50.520: INFO: 	Container guestbook-frontend ready: true, restart count 0
Jan 30 22:35:50.520: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:50.520: INFO: 	Container coredns ready: true, restart count 0
Jan 30 22:35:50.520: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:50.520: INFO: 	Container coredns ready: true, restart count 0
Jan 30 22:35:50.520: INFO: agnhost-slave-774cfc759f-qmjpq started at 2020-01-30 22:32:26 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:50.520: INFO: 	Container slave ready: true, restart count 0
Jan 30 22:35:50.520: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:50.520: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 30 22:35:50.520: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:50.520: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 30 22:35:50.520: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Jan 30 22:35:50.520: INFO: 	Container weave ready: true, restart count 0
Jan 30 22:35:50.520: INFO: 	Container weave-npc ready: true, restart count 0
Jan 30 22:35:50.520: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Jan 30 22:35:50.520: INFO: 	Container kube-scheduler ready: true, restart count 4
W0130 22:35:50.630939       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 22:35:51.039: INFO: 
Latency metrics for node jerma-server-mvvl6gufaqub
Jan 30 22:35:51.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7620" for this suite.

• Failure [209.880 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721

    Jan 30 22:35:46.096: Cannot add a new entry in 180 seconds.

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315
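
    For context, the assertion that timed out lives in test/e2e/kubectl/kubectl.go; the log does not show its body. A rough, hypothetical sketch of that kind of check — poll the guestbook frontend for up to 180 seconds, trying to add an entry and read it back — looks like the following. The guestbook.php endpoint and entry value are assumptions based on the classic guestbook example, not taken from this log.

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// tryAddEntry returns true once the frontend both accepts a new entry and
// serves it back on a subsequent read.
func tryAddEntry(base, value string) bool {
	resp, err := http.Get(base + "/guestbook.php?cmd=set&key=messages&value=" + value)
	if err != nil {
		return false
	}
	resp.Body.Close()

	resp, err = http.Get(base + "/guestbook.php?cmd=get&key=messages")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return strings.Contains(string(body), value)
}

func main() {
	deadline := time.Now().Add(180 * time.Second)
	for time.Now().Before(deadline) {
		if tryAddEntry("http://frontend", "TestEntry") {
			fmt.Println("guestbook entry added and read back")
			return
		}
		time.Sleep(5 * time.Second)
	}
	// This is the condition that fired in the failure above.
	fmt.Println("Cannot add a new entry in 180 seconds.")
}
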
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":217,"skipped":3630,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:35:51.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3032
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-3032
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3032
Jan 30 22:35:53.316: INFO: Found 0 stateful pods, waiting for 1
Jan 30 22:36:03.324: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale-up will not halt with an unhealthy stateful pod
Jan 30 22:36:03.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 30 22:36:03.826: INFO: stderr: "I0130 22:36:03.574046    2901 log.go:172] (0xc0003bedc0) (0xc0006e99a0) Create stream\nI0130 22:36:03.574450    2901 log.go:172] (0xc0003bedc0) (0xc0006e99a0) Stream added, broadcasting: 1\nI0130 22:36:03.579974    2901 log.go:172] (0xc0003bedc0) Reply frame received for 1\nI0130 22:36:03.580043    2901 log.go:172] (0xc0003bedc0) (0xc0006e9b80) Create stream\nI0130 22:36:03.580065    2901 log.go:172] (0xc0003bedc0) (0xc0006e9b80) Stream added, broadcasting: 3\nI0130 22:36:03.581980    2901 log.go:172] (0xc0003bedc0) Reply frame received for 3\nI0130 22:36:03.582028    2901 log.go:172] (0xc0003bedc0) (0xc00095c000) Create stream\nI0130 22:36:03.582039    2901 log.go:172] (0xc0003bedc0) (0xc00095c000) Stream added, broadcasting: 5\nI0130 22:36:03.583695    2901 log.go:172] (0xc0003bedc0) Reply frame received for 5\nI0130 22:36:03.675663    2901 log.go:172] (0xc0003bedc0) Data frame received for 5\nI0130 22:36:03.675755    2901 log.go:172] (0xc00095c000) (5) Data frame handling\nI0130 22:36:03.675789    2901 log.go:172] (0xc00095c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0130 22:36:03.698199    2901 log.go:172] (0xc0003bedc0) Data frame received for 3\nI0130 22:36:03.698328    2901 log.go:172] (0xc0006e9b80) (3) Data frame handling\nI0130 22:36:03.698376    2901 log.go:172] (0xc0006e9b80) (3) Data frame sent\nI0130 22:36:03.796740    2901 log.go:172] (0xc0003bedc0) Data frame received for 1\nI0130 22:36:03.797372    2901 log.go:172] (0xc0006e99a0) (1) Data frame handling\nI0130 22:36:03.797549    2901 log.go:172] (0xc0006e99a0) (1) Data frame sent\nI0130 22:36:03.797811    2901 log.go:172] (0xc0003bedc0) (0xc00095c000) Stream removed, broadcasting: 5\nI0130 22:36:03.798502    2901 log.go:172] (0xc0003bedc0) (0xc0006e99a0) Stream removed, broadcasting: 1\nI0130 22:36:03.798771    2901 log.go:172] (0xc0003bedc0) (0xc0006e9b80) Stream removed, broadcasting: 3\nI0130 22:36:03.798930    2901 log.go:172] (0xc0003bedc0) Go away received\nI0130 22:36:03.800244    2901 log.go:172] (0xc0003bedc0) (0xc0006e99a0) Stream removed, broadcasting: 1\nI0130 22:36:03.800282    2901 log.go:172] (0xc0003bedc0) (0xc0006e9b80) Stream removed, broadcasting: 3\nI0130 22:36:03.800295    2901 log.go:172] (0xc0003bedc0) (0xc00095c000) Stream removed, broadcasting: 5\n"
Jan 30 22:36:03.826: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 30 22:36:03.826: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 30 22:36:03.841: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 30 22:36:13.854: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
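
The mv above is the whole trick for making ss-0 unhealthy without killing it: the webserver container serves /usr/local/apache2/htdocs, so once index.html is moved to /tmp its HTTP readiness probe starts failing and the pod flips to Ready=false while staying Running (moving the file back later flips it back to Ready=true). A readiness probe of roughly this shape would produce that behaviour; the exact probe spec is not shown in this log, so treat this as an illustration using the v1.17-era API.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// A probe like this starts failing with 404 as soon as index.html leaves the
// htdocs directory, marking the pod unready without restarting the container.
var readinessProbe = corev1.Probe{
	Handler: corev1.Handler{ // this field is named ProbeHandler in k8s.io/api v0.22+
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/index.html",
			Port: intstr.FromInt(80),
		},
	},
	PeriodSeconds:    1,
	SuccessThreshold: 1,
	FailureThreshold: 1,
}
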
Jan 30 22:36:13.854: INFO: Waiting for statefulset status.replicas to be updated to 0
Jan 30 22:36:13.882: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 30 22:36:13.883: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:03 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  }]
Jan 30 22:36:13.883: INFO: ss-1              Pending         []
Jan 30 22:36:13.883: INFO: 
Jan 30 22:36:13.883: INFO: StatefulSet ss has not reached scale 3, at 2
Jan 30 22:36:15.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987244121s
Jan 30 22:36:16.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.341726495s
Jan 30 22:36:17.755: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.127461292s
Jan 30 22:36:18.762: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.115415116s
Jan 30 22:36:20.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.10865178s
Jan 30 22:36:21.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.128653287s
Jan 30 22:36:22.770: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.118969692s
Jan 30 22:36:23.783: INFO: Verifying statefulset ss doesn't scale past 3 for another 100.553266ms
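
The countdown above is a bounded negative check: for a fixed window the test keeps asserting that ss has not scaled past its target, and only moves on once the window expires without a violation. A hypothetical helper in that pattern follows; this is not the framework's actual code, and the context-taking Get assumes client-go v0.18+.

package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// verifyNoScalePast polls for the whole window and errors out early if more
// than max replicas ever appear. Running out the clock is the success case,
// so wait.ErrWaitTimeout is translated into a pass.
func verifyNoScalePast(client kubernetes.Interface, ns, name string, max int32, window time.Duration) error {
	err := wait.PollImmediate(time.Second, window, func() (bool, error) {
		ss, err := client.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if ss.Status.Replicas > max {
			return false, fmt.Errorf("statefulset %s scaled past %d, at %d", name, max, ss.Status.Replicas)
		}
		return false, nil // never "done": keep polling until the window elapses
	})
	if err == wait.ErrWaitTimeout {
		return nil // window expired with no violation
	}
	return err
}
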
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-3032
Jan 30 22:36:24.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 30 22:36:25.188: INFO: stderr: "I0130 22:36:25.022147    2925 log.go:172] (0xc000a3e0b0) (0xc000718320) Create stream\nI0130 22:36:25.022392    2925 log.go:172] (0xc000a3e0b0) (0xc000718320) Stream added, broadcasting: 1\nI0130 22:36:25.025514    2925 log.go:172] (0xc000a3e0b0) Reply frame received for 1\nI0130 22:36:25.025544    2925 log.go:172] (0xc000a3e0b0) (0xc0007183c0) Create stream\nI0130 22:36:25.025552    2925 log.go:172] (0xc000a3e0b0) (0xc0007183c0) Stream added, broadcasting: 3\nI0130 22:36:25.027099    2925 log.go:172] (0xc000a3e0b0) Reply frame received for 3\nI0130 22:36:25.027132    2925 log.go:172] (0xc000a3e0b0) (0xc0007d6000) Create stream\nI0130 22:36:25.027148    2925 log.go:172] (0xc000a3e0b0) (0xc0007d6000) Stream added, broadcasting: 5\nI0130 22:36:25.028385    2925 log.go:172] (0xc000a3e0b0) Reply frame received for 5\nI0130 22:36:25.088290    2925 log.go:172] (0xc000a3e0b0) Data frame received for 3\nI0130 22:36:25.088434    2925 log.go:172] (0xc0007183c0) (3) Data frame handling\nI0130 22:36:25.088491    2925 log.go:172] (0xc0007183c0) (3) Data frame sent\nI0130 22:36:25.088556    2925 log.go:172] (0xc000a3e0b0) Data frame received for 5\nI0130 22:36:25.088588    2925 log.go:172] (0xc0007d6000) (5) Data frame handling\nI0130 22:36:25.088616    2925 log.go:172] (0xc0007d6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0130 22:36:25.174737    2925 log.go:172] (0xc000a3e0b0) Data frame received for 1\nI0130 22:36:25.175031    2925 log.go:172] (0xc000a3e0b0) (0xc0007d6000) Stream removed, broadcasting: 5\nI0130 22:36:25.175450    2925 log.go:172] (0xc000718320) (1) Data frame handling\nI0130 22:36:25.175751    2925 log.go:172] (0xc000718320) (1) Data frame sent\nI0130 22:36:25.175851    2925 log.go:172] (0xc000a3e0b0) (0xc0007183c0) Stream removed, broadcasting: 3\nI0130 22:36:25.175932    2925 log.go:172] (0xc000a3e0b0) (0xc000718320) Stream removed, broadcasting: 1\nI0130 22:36:25.175955    2925 log.go:172] (0xc000a3e0b0) Go away received\nI0130 22:36:25.178161    2925 log.go:172] (0xc000a3e0b0) (0xc000718320) Stream removed, broadcasting: 1\nI0130 22:36:25.178177    2925 log.go:172] (0xc000a3e0b0) (0xc0007183c0) Stream removed, broadcasting: 3\nI0130 22:36:25.178184    2925 log.go:172] (0xc000a3e0b0) (0xc0007d6000) Stream removed, broadcasting: 5\n"
Jan 30 22:36:25.188: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 30 22:36:25.188: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 30 22:36:25.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 30 22:36:25.591: INFO: stderr: "I0130 22:36:25.416456    2945 log.go:172] (0xc000b374a0) (0xc0008f83c0) Create stream\nI0130 22:36:25.416662    2945 log.go:172] (0xc000b374a0) (0xc0008f83c0) Stream added, broadcasting: 1\nI0130 22:36:25.422497    2945 log.go:172] (0xc000b374a0) Reply frame received for 1\nI0130 22:36:25.423148    2945 log.go:172] (0xc000b374a0) (0xc0009e0320) Create stream\nI0130 22:36:25.423252    2945 log.go:172] (0xc000b374a0) (0xc0009e0320) Stream added, broadcasting: 3\nI0130 22:36:25.434073    2945 log.go:172] (0xc000b374a0) Reply frame received for 3\nI0130 22:36:25.434162    2945 log.go:172] (0xc000b374a0) (0xc0009e0000) Create stream\nI0130 22:36:25.434178    2945 log.go:172] (0xc000b374a0) (0xc0009e0000) Stream added, broadcasting: 5\nI0130 22:36:25.435331    2945 log.go:172] (0xc000b374a0) Reply frame received for 5\nI0130 22:36:25.484374    2945 log.go:172] (0xc000b374a0) Data frame received for 5\nI0130 22:36:25.484455    2945 log.go:172] (0xc0009e0000) (5) Data frame handling\nI0130 22:36:25.484531    2945 log.go:172] (0xc0009e0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0130 22:36:25.486187    2945 log.go:172] (0xc000b374a0) Data frame received for 3\nI0130 22:36:25.486244    2945 log.go:172] (0xc0009e0320) (3) Data frame handling\nI0130 22:36:25.486256    2945 log.go:172] (0xc0009e0320) (3) Data frame sent\nI0130 22:36:25.486288    2945 log.go:172] (0xc000b374a0) Data frame received for 5\nI0130 22:36:25.486295    2945 log.go:172] (0xc0009e0000) (5) Data frame handling\nI0130 22:36:25.486301    2945 log.go:172] (0xc0009e0000) (5) Data frame sent\nI0130 22:36:25.486308    2945 log.go:172] (0xc000b374a0) Data frame received for 5\nI0130 22:36:25.486318    2945 log.go:172] (0xc0009e0000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0130 22:36:25.486365    2945 log.go:172] (0xc0009e0000) (5) Data frame sent\nI0130 22:36:25.572147    2945 log.go:172] (0xc000b374a0) (0xc0009e0320) Stream removed, broadcasting: 3\nI0130 22:36:25.572364    2945 log.go:172] (0xc000b374a0) Data frame received for 1\nI0130 22:36:25.572435    2945 log.go:172] (0xc000b374a0) (0xc0009e0000) Stream removed, broadcasting: 5\nI0130 22:36:25.572511    2945 log.go:172] (0xc0008f83c0) (1) Data frame handling\nI0130 22:36:25.572552    2945 log.go:172] (0xc0008f83c0) (1) Data frame sent\nI0130 22:36:25.572574    2945 log.go:172] (0xc000b374a0) (0xc0008f83c0) Stream removed, broadcasting: 1\nI0130 22:36:25.572595    2945 log.go:172] (0xc000b374a0) Go away received\nI0130 22:36:25.574233    2945 log.go:172] (0xc000b374a0) (0xc0008f83c0) Stream removed, broadcasting: 1\nI0130 22:36:25.574261    2945 log.go:172] (0xc000b374a0) (0xc0009e0320) Stream removed, broadcasting: 3\nI0130 22:36:25.574276    2945 log.go:172] (0xc000b374a0) (0xc0009e0000) Stream removed, broadcasting: 5\n"
Jan 30 22:36:25.591: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 30 22:36:25.591: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 30 22:36:25.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 30 22:36:25.915: INFO: stderr: "I0130 22:36:25.746884    2968 log.go:172] (0xc000971550) (0xc0008de820) Create stream\nI0130 22:36:25.747116    2968 log.go:172] (0xc000971550) (0xc0008de820) Stream added, broadcasting: 1\nI0130 22:36:25.756008    2968 log.go:172] (0xc000971550) Reply frame received for 1\nI0130 22:36:25.756088    2968 log.go:172] (0xc000971550) (0xc0008de000) Create stream\nI0130 22:36:25.756102    2968 log.go:172] (0xc000971550) (0xc0008de000) Stream added, broadcasting: 3\nI0130 22:36:25.757799    2968 log.go:172] (0xc000971550) Reply frame received for 3\nI0130 22:36:25.757822    2968 log.go:172] (0xc000971550) (0xc0006186e0) Create stream\nI0130 22:36:25.757832    2968 log.go:172] (0xc000971550) (0xc0006186e0) Stream added, broadcasting: 5\nI0130 22:36:25.760390    2968 log.go:172] (0xc000971550) Reply frame received for 5\nI0130 22:36:25.835856    2968 log.go:172] (0xc000971550) Data frame received for 3\nI0130 22:36:25.836017    2968 log.go:172] (0xc0008de000) (3) Data frame handling\nI0130 22:36:25.836104    2968 log.go:172] (0xc0008de000) (3) Data frame sent\nI0130 22:36:25.836158    2968 log.go:172] (0xc000971550) Data frame received for 5\nI0130 22:36:25.836190    2968 log.go:172] (0xc0006186e0) (5) Data frame handling\nI0130 22:36:25.836212    2968 log.go:172] (0xc0006186e0) (5) Data frame sent\nI0130 22:36:25.836235    2968 log.go:172] (0xc000971550) Data frame received for 5\nI0130 22:36:25.836250    2968 log.go:172] (0xc0006186e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0130 22:36:25.836303    2968 log.go:172] (0xc0006186e0) (5) Data frame sent\nI0130 22:36:25.836331    2968 log.go:172] (0xc000971550) Data frame received for 5\nI0130 22:36:25.836346    2968 log.go:172] (0xc0006186e0) (5) Data frame handling\nI0130 22:36:25.836364    2968 log.go:172] (0xc0006186e0) (5) Data frame sent\n+ true\nI0130 22:36:25.901993    2968 log.go:172] (0xc000971550) Data frame received for 1\nI0130 22:36:25.902172    2968 log.go:172] (0xc0008de820) (1) Data frame handling\nI0130 22:36:25.902197    2968 log.go:172] (0xc0008de820) (1) Data frame sent\nI0130 22:36:25.902377    2968 log.go:172] (0xc000971550) (0xc0006186e0) Stream removed, broadcasting: 5\nI0130 22:36:25.902538    2968 log.go:172] (0xc000971550) (0xc0008de000) Stream removed, broadcasting: 3\nI0130 22:36:25.902743    2968 log.go:172] (0xc000971550) (0xc0008de820) Stream removed, broadcasting: 1\nI0130 22:36:25.902786    2968 log.go:172] (0xc000971550) Go away received\nI0130 22:36:25.904534    2968 log.go:172] (0xc000971550) (0xc0008de820) Stream removed, broadcasting: 1\nI0130 22:36:25.904575    2968 log.go:172] (0xc000971550) (0xc0008de000) Stream removed, broadcasting: 3\nI0130 22:36:25.904585    2968 log.go:172] (0xc000971550) (0xc0006186e0) Stream removed, broadcasting: 5\n"
Jan 30 22:36:25.916: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 30 22:36:25.916: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 30 22:36:25.921: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan 30 22:36:35.927: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 22:36:35.928: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 30 22:36:35.928: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
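
The burst behaviour being exercised here — ss-1 and ss-2 created and started while ss-0 was still unready, instead of strict one-at-a-time ordinal order — comes from the StatefulSet's pod management policy. A minimal spec sketch follows, assuming Parallel pod management and reusing the container name, service name, and httpd image that appear elsewhere in this log; the test's real manifest is not printed here.

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// burstStatefulSet builds an ss-style StatefulSet whose pods are created and
// deleted in a burst rather than in strict ordinal order.
func burstStatefulSet(replicas int32) *appsv1.StatefulSet {
	labels := map[string]string{"app": "ss"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-3032"},
		Spec: appsv1.StatefulSetSpec{
			Replicas: &replicas,
			// Parallel is what allows scale up/down to proceed even while an
			// existing pod is unhealthy, which is exactly what this test checks.
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			ServiceName:         "test", // "Creating service test in namespace statefulset-3032" above
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver", // the container reported unready above
						Image: "httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
}
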
STEP: Scale-down will not halt with an unhealthy stateful pod
Jan 30 22:36:35.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 30 22:36:36.251: INFO: stderr: "I0130 22:36:36.100718    2988 log.go:172] (0xc000a79550) (0xc00090a8c0) Create stream\nI0130 22:36:36.100869    2988 log.go:172] (0xc000a79550) (0xc00090a8c0) Stream added, broadcasting: 1\nI0130 22:36:36.112079    2988 log.go:172] (0xc000a79550) Reply frame received for 1\nI0130 22:36:36.112135    2988 log.go:172] (0xc000a79550) (0xc000662780) Create stream\nI0130 22:36:36.112147    2988 log.go:172] (0xc000a79550) (0xc000662780) Stream added, broadcasting: 3\nI0130 22:36:36.113138    2988 log.go:172] (0xc000a79550) Reply frame received for 3\nI0130 22:36:36.113162    2988 log.go:172] (0xc000a79550) (0xc000451540) Create stream\nI0130 22:36:36.113170    2988 log.go:172] (0xc000a79550) (0xc000451540) Stream added, broadcasting: 5\nI0130 22:36:36.114146    2988 log.go:172] (0xc000a79550) Reply frame received for 5\nI0130 22:36:36.172899    2988 log.go:172] (0xc000a79550) Data frame received for 5\nI0130 22:36:36.172959    2988 log.go:172] (0xc000451540) (5) Data frame handling\nI0130 22:36:36.172983    2988 log.go:172] (0xc000451540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0130 22:36:36.174684    2988 log.go:172] (0xc000a79550) Data frame received for 3\nI0130 22:36:36.174716    2988 log.go:172] (0xc000662780) (3) Data frame handling\nI0130 22:36:36.174736    2988 log.go:172] (0xc000662780) (3) Data frame sent\nI0130 22:36:36.238328    2988 log.go:172] (0xc000a79550) Data frame received for 1\nI0130 22:36:36.238506    2988 log.go:172] (0xc000a79550) (0xc000451540) Stream removed, broadcasting: 5\nI0130 22:36:36.238581    2988 log.go:172] (0xc00090a8c0) (1) Data frame handling\nI0130 22:36:36.238659    2988 log.go:172] (0xc00090a8c0) (1) Data frame sent\nI0130 22:36:36.238700    2988 log.go:172] (0xc000a79550) (0xc000662780) Stream removed, broadcasting: 3\nI0130 22:36:36.238793    2988 log.go:172] (0xc000a79550) (0xc00090a8c0) Stream removed, broadcasting: 1\nI0130 22:36:36.238870    2988 log.go:172] (0xc000a79550) Go away received\nI0130 22:36:36.240121    2988 log.go:172] (0xc000a79550) (0xc00090a8c0) Stream removed, broadcasting: 1\nI0130 22:36:36.240144    2988 log.go:172] (0xc000a79550) (0xc000662780) Stream removed, broadcasting: 3\nI0130 22:36:36.240157    2988 log.go:172] (0xc000a79550) (0xc000451540) Stream removed, broadcasting: 5\n"
Jan 30 22:36:36.252: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 30 22:36:36.252: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 30 22:36:36.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 30 22:36:36.652: INFO: stderr: "I0130 22:36:36.455526    3010 log.go:172] (0xc000a71b80) (0xc000a62a00) Create stream\nI0130 22:36:36.455692    3010 log.go:172] (0xc000a71b80) (0xc000a62a00) Stream added, broadcasting: 1\nI0130 22:36:36.458262    3010 log.go:172] (0xc000a71b80) Reply frame received for 1\nI0130 22:36:36.458353    3010 log.go:172] (0xc000a71b80) (0xc000a62aa0) Create stream\nI0130 22:36:36.458367    3010 log.go:172] (0xc000a71b80) (0xc000a62aa0) Stream added, broadcasting: 3\nI0130 22:36:36.459469    3010 log.go:172] (0xc000a71b80) Reply frame received for 3\nI0130 22:36:36.459508    3010 log.go:172] (0xc000a71b80) (0xc000958460) Create stream\nI0130 22:36:36.459517    3010 log.go:172] (0xc000a71b80) (0xc000958460) Stream added, broadcasting: 5\nI0130 22:36:36.460476    3010 log.go:172] (0xc000a71b80) Reply frame received for 5\nI0130 22:36:36.554150    3010 log.go:172] (0xc000a71b80) Data frame received for 5\nI0130 22:36:36.554334    3010 log.go:172] (0xc000958460) (5) Data frame handling\nI0130 22:36:36.554414    3010 log.go:172] (0xc000958460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0130 22:36:36.583513    3010 log.go:172] (0xc000a71b80) Data frame received for 3\nI0130 22:36:36.583542    3010 log.go:172] (0xc000a62aa0) (3) Data frame handling\nI0130 22:36:36.583555    3010 log.go:172] (0xc000a62aa0) (3) Data frame sent\nI0130 22:36:36.640334    3010 log.go:172] (0xc000a71b80) Data frame received for 1\nI0130 22:36:36.640416    3010 log.go:172] (0xc000a71b80) (0xc000958460) Stream removed, broadcasting: 5\nI0130 22:36:36.640466    3010 log.go:172] (0xc000a62a00) (1) Data frame handling\nI0130 22:36:36.640484    3010 log.go:172] (0xc000a62a00) (1) Data frame sent\nI0130 22:36:36.640509    3010 log.go:172] (0xc000a71b80) (0xc000a62aa0) Stream removed, broadcasting: 3\nI0130 22:36:36.640543    3010 log.go:172] (0xc000a71b80) (0xc000a62a00) Stream removed, broadcasting: 1\nI0130 22:36:36.640568    3010 log.go:172] (0xc000a71b80) Go away received\nI0130 22:36:36.641233    3010 log.go:172] (0xc000a71b80) (0xc000a62a00) Stream removed, broadcasting: 1\nI0130 22:36:36.641243    3010 log.go:172] (0xc000a71b80) (0xc000a62aa0) Stream removed, broadcasting: 3\nI0130 22:36:36.641249    3010 log.go:172] (0xc000a71b80) (0xc000958460) Stream removed, broadcasting: 5\n"
Jan 30 22:36:36.653: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 30 22:36:36.653: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 30 22:36:36.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 30 22:36:37.002: INFO: stderr: "I0130 22:36:36.795396    3031 log.go:172] (0xc0000f5550) (0xc0005abea0) Create stream\nI0130 22:36:36.795537    3031 log.go:172] (0xc0000f5550) (0xc0005abea0) Stream added, broadcasting: 1\nI0130 22:36:36.798924    3031 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0130 22:36:36.798964    3031 log.go:172] (0xc0000f5550) (0xc00053e780) Create stream\nI0130 22:36:36.798977    3031 log.go:172] (0xc0000f5550) (0xc00053e780) Stream added, broadcasting: 3\nI0130 22:36:36.800078    3031 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0130 22:36:36.800098    3031 log.go:172] (0xc0000f5550) (0xc000713540) Create stream\nI0130 22:36:36.800109    3031 log.go:172] (0xc0000f5550) (0xc000713540) Stream added, broadcasting: 5\nI0130 22:36:36.801551    3031 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0130 22:36:36.879437    3031 log.go:172] (0xc0000f5550) Data frame received for 5\nI0130 22:36:36.879737    3031 log.go:172] (0xc000713540) (5) Data frame handling\nI0130 22:36:36.879857    3031 log.go:172] (0xc000713540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0130 22:36:36.905819    3031 log.go:172] (0xc0000f5550) Data frame received for 3\nI0130 22:36:36.906004    3031 log.go:172] (0xc00053e780) (3) Data frame handling\nI0130 22:36:36.906070    3031 log.go:172] (0xc00053e780) (3) Data frame sent\nI0130 22:36:36.991274    3031 log.go:172] (0xc0000f5550) Data frame received for 1\nI0130 22:36:36.991355    3031 log.go:172] (0xc0000f5550) (0xc00053e780) Stream removed, broadcasting: 3\nI0130 22:36:36.991405    3031 log.go:172] (0xc0005abea0) (1) Data frame handling\nI0130 22:36:36.991444    3031 log.go:172] (0xc0005abea0) (1) Data frame sent\nI0130 22:36:36.991465    3031 log.go:172] (0xc0000f5550) (0xc000713540) Stream removed, broadcasting: 5\nI0130 22:36:36.991482    3031 log.go:172] (0xc0000f5550) (0xc0005abea0) Stream removed, broadcasting: 1\nI0130 22:36:36.991501    3031 log.go:172] (0xc0000f5550) Go away received\nI0130 22:36:36.993859    3031 log.go:172] (0xc0000f5550) (0xc0005abea0) Stream removed, broadcasting: 1\nI0130 22:36:36.993886    3031 log.go:172] (0xc0000f5550) (0xc00053e780) Stream removed, broadcasting: 3\nI0130 22:36:36.993897    3031 log.go:172] (0xc0000f5550) (0xc000713540) Stream removed, broadcasting: 5\n"
Jan 30 22:36:37.002: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 30 22:36:37.002: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 30 22:36:37.002: INFO: Waiting for statefulset status.replicas to be updated to 0
Jan 30 22:36:37.006: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 30 22:36:47.020: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 22:36:47.020: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 22:36:47.020: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 30 22:36:47.038: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 30 22:36:47.038: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  }]
Jan 30 22:36:47.038: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:47.038: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:47.038: INFO: 
Jan 30 22:36:47.038: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 22:36:49.127: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 30 22:36:49.127: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  }]
Jan 30 22:36:49.128: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:49.128: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:49.128: INFO: 
Jan 30 22:36:49.128: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 22:36:50.136: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 30 22:36:50.136: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  }]
Jan 30 22:36:50.136: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:50.136: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:50.136: INFO: 
Jan 30 22:36:50.136: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 22:36:51.166: INFO: (polls at 22:36:51 and 22:36:52 repeat the same POD/NODE/PHASE table: ss-0, ss-1 and ss-2 still Running with webserver unready)
Jan 30 22:36:52.174: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 22:36:53.181: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 30 22:36:53.181: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  }]
Jan 30 22:36:53.181: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:53.181: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:53.181: INFO: 
Jan 30 22:36:53.181: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 30 22:36:54.189: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 30 22:36:54.189: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  }]
Jan 30 22:36:54.189: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:54.189: INFO: 
Jan 30 22:36:54.189: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 30 22:36:55.195: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 30 22:36:55.196: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  }]
Jan 30 22:36:55.196: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:55.196: INFO: 
Jan 30 22:36:55.196: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 30 22:36:56.213: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 30 22:36:56.213: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:35:53 +0000 UTC  }]
Jan 30 22:36:56.213: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-30 22:36:13 +0000 UTC  }]
Jan 30 22:36:56.213: INFO: 
Jan 30 22:36:56.213: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3032
Jan 30 22:36:57.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 30 22:36:57.510: INFO: rc: 1
Jan 30 22:36:57.511: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Jan 30 22:37:07.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 30 22:37:07.720: INFO: rc: 1
Jan 30 22:37:07.720: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan 30 22:37:17.721 – 22:41:55.188: INFO: (the same RunHostCmd retry loop continued every 10s — 28 further kubectl exec attempts against ss-0, each returning rc: 1 with stderr "Error from server (NotFound): pods "ss-0" not found" and exit status 1)
Jan 30 22:42:05.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3032 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 30 22:42:05.363: INFO: rc: 1
Jan 30 22:42:05.363: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Jan 30 22:42:05.363: INFO: Scaling statefulset ss to 0
Jan 30 22:42:05.389: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 30 22:42:05.393: INFO: Deleting all statefulset in ns statefulset-3032
Jan 30 22:42:05.424: INFO: Scaling statefulset ss to 0
Jan 30 22:42:05.434: INFO: Waiting for statefulset status.replicas updated to 0
Jan 30 22:42:05.437: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:42:05.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3032" for this suite.

• [SLOW TEST:374.277 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":218,"skipped":3671,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:42:05.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 22:42:05.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e5d7686-3933-4ab5-9bf9-d825f139753e" in namespace "projected-9290" to be "success or failure"
Jan 30 22:42:05.633: INFO: Pod "downwardapi-volume-5e5d7686-3933-4ab5-9bf9-d825f139753e": Phase="Pending", Reason="", readiness=false. Elapsed: 38.004829ms
Jan 30 22:42:07.641: INFO: Pod "downwardapi-volume-5e5d7686-3933-4ab5-9bf9-d825f139753e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04593519s
Jan 30 22:42:09.649: INFO: Pod "downwardapi-volume-5e5d7686-3933-4ab5-9bf9-d825f139753e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053676597s
Jan 30 22:42:11.655: INFO: Pod "downwardapi-volume-5e5d7686-3933-4ab5-9bf9-d825f139753e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059172716s
Jan 30 22:42:13.665: INFO: Pod "downwardapi-volume-5e5d7686-3933-4ab5-9bf9-d825f139753e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069488134s
STEP: Saw pod success
Jan 30 22:42:13.665: INFO: Pod "downwardapi-volume-5e5d7686-3933-4ab5-9bf9-d825f139753e" satisfied condition "success or failure"
Jan 30 22:42:13.671: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5e5d7686-3933-4ab5-9bf9-d825f139753e container client-container: 
STEP: delete the pod
Jan 30 22:42:13.983: INFO: Waiting for pod downwardapi-volume-5e5d7686-3933-4ab5-9bf9-d825f139753e to disappear
Jan 30 22:42:13.992: INFO: Pod downwardapi-volume-5e5d7686-3933-4ab5-9bf9-d825f139753e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:42:13.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9290" for this suite.

• [SLOW TEST:8.525 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3674,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:42:14.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0130 22:42:16.293730       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 22:42:16.293: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:42:16.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7765" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":220,"skipped":3677,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:42:16.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-4727
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4727 to expose endpoints map[]
Jan 30 22:42:17.212: INFO: successfully validated that service multi-endpoint-test in namespace services-4727 exposes endpoints map[] (26.149951ms elapsed)
STEP: Creating pod pod1 in namespace services-4727
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4727 to expose endpoints map[pod1:[100]]
Jan 30 22:42:22.182: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.952588421s elapsed, will retry)
Jan 30 22:42:26.243: INFO: successfully validated that service multi-endpoint-test in namespace services-4727 exposes endpoints map[pod1:[100]] (9.014198586s elapsed)
STEP: Creating pod pod2 in namespace services-4727
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4727 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 30 22:42:30.641: INFO: Unexpected endpoints: found map[c74ce9ac-6889-4b5d-96ca-1e67d02c383d:[100]], expected map[pod1:[100] pod2:[101]] (4.390877299s elapsed, will retry)
Jan 30 22:42:33.691: INFO: successfully validated that service multi-endpoint-test in namespace services-4727 exposes endpoints map[pod1:[100] pod2:[101]] (7.441392636s elapsed)
STEP: Deleting pod pod1 in namespace services-4727
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4727 to expose endpoints map[pod2:[101]]
Jan 30 22:42:33.727: INFO: successfully validated that service multi-endpoint-test in namespace services-4727 exposes endpoints map[pod2:[101]] (25.859569ms elapsed)
STEP: Deleting pod pod2 in namespace services-4727
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4727 to expose endpoints map[]
Jan 30 22:42:33.808: INFO: successfully validated that service multi-endpoint-test in namespace services-4727 exposes endpoints map[] (69.513711ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:42:33.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4727" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.025 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":221,"skipped":3686,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:42:33.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 22:42:34.040: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4" in namespace "downward-api-9503" to be "success or failure"
Jan 30 22:42:34.048: INFO: Pod "downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.517441ms
Jan 30 22:42:36.208: INFO: Pod "downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167599592s
Jan 30 22:42:38.216: INFO: Pod "downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175425242s
Jan 30 22:42:40.220: INFO: Pod "downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179708606s
Jan 30 22:42:42.228: INFO: Pod "downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187021484s
Jan 30 22:42:44.280: INFO: Pod "downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.239364151s
STEP: Saw pod success
Jan 30 22:42:44.280: INFO: Pod "downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4" satisfied condition "success or failure"
Jan 30 22:42:44.282: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4 container client-container: 
STEP: delete the pod
Jan 30 22:42:44.319: INFO: Waiting for pod downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4 to disappear
Jan 30 22:42:44.328: INFO: Pod downwardapi-volume-8e8d84f1-f13d-4595-9760-909e3dc337a4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:42:44.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9503" for this suite.

• [SLOW TEST:10.380 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3687,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:42:44.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 30 22:42:45.378: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 30 22:42:47.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716020965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716020965, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716020965, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716020965, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 30 22:42:49.414: INFO: deployment status unchanged (Available=False MinimumReplicasUnavailable; Progressing=True ReplicaSetUpdated, ReplicaSet "sample-webhook-deployment-5f65f8c764" is progressing)
Jan 30 22:42:51.408: INFO: deployment status unchanged
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 30 22:42:54.448: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:42:54.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7460-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:42:55.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1666" for this suite.
STEP: Destroying namespace "webhook-1666-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.136 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":223,"skipped":3701,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:42:55.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-bbv8
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 22:42:55.579: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bbv8" in namespace "subpath-1152" to be "success or failure"
Jan 30 22:42:55.584: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.678508ms
Jan 30 22:42:57.592: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012872289s
Jan 30 22:42:59.599: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019272997s
Jan 30 22:43:01.608: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028267897s
Jan 30 22:43:03.621: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 8.04106426s
Jan 30 22:43:05.627: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 10.047166664s
Jan 30 22:43:07.660: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 12.080325551s
Jan 30 22:43:09.666: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 14.086869703s
Jan 30 22:43:11.671: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 16.091860105s
Jan 30 22:43:13.677: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 18.097934524s
Jan 30 22:43:15.691: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 20.111750973s
Jan 30 22:43:17.714: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 22.134198644s
Jan 30 22:43:19.738: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 24.158433036s
Jan 30 22:43:21.746: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 26.16659157s
Jan 30 22:43:23.753: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Running", Reason="", readiness=true. Elapsed: 28.173394196s
Jan 30 22:43:25.762: INFO: Pod "pod-subpath-test-downwardapi-bbv8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.182418379s
STEP: Saw pod success
Jan 30 22:43:25.762: INFO: Pod "pod-subpath-test-downwardapi-bbv8" satisfied condition "success or failure"
Jan 30 22:43:25.772: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-bbv8 container test-container-subpath-downwardapi-bbv8: 
STEP: delete the pod
Jan 30 22:43:25.894: INFO: Waiting for pod pod-subpath-test-downwardapi-bbv8 to disappear
Jan 30 22:43:25.902: INFO: Pod pod-subpath-test-downwardapi-bbv8 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-bbv8
Jan 30 22:43:25.902: INFO: Deleting pod "pod-subpath-test-downwardapi-bbv8" in namespace "subpath-1152"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:43:25.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1152" for this suite.

• [SLOW TEST:30.466 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":224,"skipped":3755,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:43:25.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 22:43:26.091: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5cd59162-a4db-4f88-bb88-bac2afc7241b" in namespace "projected-752" to be "success or failure"
Jan 30 22:43:26.168: INFO: Pod "downwardapi-volume-5cd59162-a4db-4f88-bb88-bac2afc7241b": Phase="Pending", Reason="", readiness=false. Elapsed: 76.645723ms
Jan 30 22:43:28.176: INFO: Pod "downwardapi-volume-5cd59162-a4db-4f88-bb88-bac2afc7241b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084585271s
Jan 30 22:43:30.183: INFO: Pod "downwardapi-volume-5cd59162-a4db-4f88-bb88-bac2afc7241b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092249417s
Jan 30 22:43:32.194: INFO: Pod "downwardapi-volume-5cd59162-a4db-4f88-bb88-bac2afc7241b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102999301s
Jan 30 22:43:34.199: INFO: Pod "downwardapi-volume-5cd59162-a4db-4f88-bb88-bac2afc7241b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.108257253s
STEP: Saw pod success
Jan 30 22:43:34.200: INFO: Pod "downwardapi-volume-5cd59162-a4db-4f88-bb88-bac2afc7241b" satisfied condition "success or failure"
Jan 30 22:43:34.203: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5cd59162-a4db-4f88-bb88-bac2afc7241b container client-container: 
STEP: delete the pod
Jan 30 22:43:34.243: INFO: Waiting for pod downwardapi-volume-5cd59162-a4db-4f88-bb88-bac2afc7241b to disappear
Jan 30 22:43:34.270: INFO: Pod downwardapi-volume-5cd59162-a4db-4f88-bb88-bac2afc7241b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:43:34.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-752" for this suite.

• [SLOW TEST:8.339 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3762,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:43:34.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:43:34.430: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 30 22:43:37.516: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:43:38.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3776" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":226,"skipped":3781,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:43:38.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-1c981f61-6e07-4e6a-8c3b-14242f562b17
STEP: Creating a pod to test consume secrets
Jan 30 22:43:38.970: INFO: Waiting up to 5m0s for pod "pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f" in namespace "secrets-6076" to be "success or failure"
Jan 30 22:43:39.008: INFO: Pod "pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.980597ms
Jan 30 22:43:41.833: INFO: Pod "pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.863639509s
Jan 30 22:43:43.990: INFO: Pod "pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.019839847s
Jan 30 22:43:46.298: INFO: Pod "pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.328084097s
Jan 30 22:43:48.307: INFO: Pod "pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.336744103s
Jan 30 22:43:50.315: INFO: Pod "pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.345362807s
Jan 30 22:43:52.321: INFO: Pod "pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.351174763s
STEP: Saw pod success
Jan 30 22:43:52.321: INFO: Pod "pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f" satisfied condition "success or failure"
Jan 30 22:43:52.323: INFO: Trying to get logs from node jerma-node pod pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f container secret-volume-test: 
STEP: delete the pod
Jan 30 22:43:52.609: INFO: Waiting for pod pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f to disappear
Jan 30 22:43:52.617: INFO: Pod pod-secrets-0cc48def-3553-4e8d-aa37-49d96174628f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:43:52.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6076" for this suite.

• [SLOW TEST:14.019 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3798,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:43:52.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Jan 30 22:43:52.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1906 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 30 22:44:00.874: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0130 22:43:59.926497    3672 log.go:172] (0xc000b040b0) (0xc00081dae0) Create stream\nI0130 22:43:59.926924    3672 log.go:172] (0xc000b040b0) (0xc00081dae0) Stream added, broadcasting: 1\nI0130 22:43:59.931816    3672 log.go:172] (0xc000b040b0) Reply frame received for 1\nI0130 22:43:59.931966    3672 log.go:172] (0xc000b040b0) (0xc0005e2000) Create stream\nI0130 22:43:59.932008    3672 log.go:172] (0xc000b040b0) (0xc0005e2000) Stream added, broadcasting: 3\nI0130 22:43:59.933602    3672 log.go:172] (0xc000b040b0) Reply frame received for 3\nI0130 22:43:59.933640    3672 log.go:172] (0xc000b040b0) (0xc00081db80) Create stream\nI0130 22:43:59.933651    3672 log.go:172] (0xc000b040b0) (0xc00081db80) Stream added, broadcasting: 5\nI0130 22:43:59.935794    3672 log.go:172] (0xc000b040b0) Reply frame received for 5\nI0130 22:43:59.935873    3672 log.go:172] (0xc000b040b0) (0xc000638000) Create stream\nI0130 22:43:59.935898    3672 log.go:172] (0xc000b040b0) (0xc000638000) Stream added, broadcasting: 7\nI0130 22:43:59.938256    3672 log.go:172] (0xc000b040b0) Reply frame received for 7\nI0130 22:43:59.939075    3672 log.go:172] (0xc0005e2000) (3) Writing data frame\nI0130 22:43:59.939431    3672 log.go:172] (0xc0005e2000) (3) Writing data frame\nI0130 22:43:59.943121    3672 log.go:172] (0xc000b040b0) Data frame received for 5\nI0130 22:43:59.943139    3672 log.go:172] (0xc00081db80) (5) Data frame handling\nI0130 22:43:59.943166    3672 log.go:172] (0xc00081db80) (5) Data frame sent\nI0130 22:43:59.947196    3672 log.go:172] (0xc000b040b0) Data frame received for 5\nI0130 22:43:59.947212    3672 log.go:172] (0xc00081db80) (5) Data frame handling\nI0130 22:43:59.947219    3672 log.go:172] (0xc00081db80) (5) Data frame sent\nI0130 22:44:00.797499    3672 log.go:172] (0xc000b040b0) (0xc0005e2000) Stream removed, broadcasting: 3\nI0130 22:44:00.797840    3672 log.go:172] (0xc000b040b0) Data frame received for 1\nI0130 22:44:00.797872    3672 log.go:172] (0xc00081dae0) (1) Data frame handling\nI0130 22:44:00.797907    3672 log.go:172] (0xc00081dae0) (1) Data frame sent\nI0130 22:44:00.797932    3672 log.go:172] (0xc000b040b0) (0xc00081dae0) Stream removed, broadcasting: 1\nI0130 22:44:00.799311    3672 log.go:172] (0xc000b040b0) (0xc00081db80) Stream removed, broadcasting: 5\nI0130 22:44:00.799457    3672 log.go:172] (0xc000b040b0) (0xc000638000) Stream removed, broadcasting: 7\nI0130 22:44:00.799551    3672 log.go:172] (0xc000b040b0) (0xc00081dae0) Stream removed, broadcasting: 1\nI0130 22:44:00.799566    3672 log.go:172] (0xc000b040b0) (0xc0005e2000) Stream removed, broadcasting: 3\nI0130 22:44:00.799583    3672 log.go:172] (0xc000b040b0) (0xc00081db80) Stream removed, broadcasting: 5\nI0130 22:44:00.799683    3672 log.go:172] (0xc000b040b0) Go away received\nI0130 22:44:00.799840    3672 log.go:172] (0xc000b040b0) (0xc000638000) Stream removed, broadcasting: 7\n"
Jan 30 22:44:00.875: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:44:02.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1906" for this suite.

• [SLOW TEST:10.293 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":228,"skipped":3828,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:44:02.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 30 22:44:02.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5095'
Jan 30 22:44:03.172: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 30 22:44:03.172: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Jan 30 22:44:03.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5095'
Jan 30 22:44:03.364: INFO: stderr: ""
Jan 30 22:44:03.364: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:44:03.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5095" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":229,"skipped":3832,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:44:03.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Jan 30 22:44:03.434: INFO: Waiting up to 5m0s for pod "var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5" in namespace "var-expansion-2464" to be "success or failure"
Jan 30 22:44:03.437: INFO: Pod "var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398855ms
Jan 30 22:44:05.445: INFO: Pod "var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010260437s
Jan 30 22:44:07.452: INFO: Pod "var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017344277s
Jan 30 22:44:09.469: INFO: Pod "var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034305621s
Jan 30 22:44:11.475: INFO: Pod "var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040434571s
Jan 30 22:44:13.483: INFO: Pod "var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048258897s
STEP: Saw pod success
Jan 30 22:44:13.483: INFO: Pod "var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5" satisfied condition "success or failure"
Jan 30 22:44:13.487: INFO: Trying to get logs from node jerma-node pod var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5 container dapi-container: 
STEP: delete the pod
Jan 30 22:44:13.838: INFO: Waiting for pod var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5 to disappear
Jan 30 22:44:13.841: INFO: Pod var-expansion-10b8d819-fa4e-4048-a655-1a407e2882f5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:44:13.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2464" for this suite.

• [SLOW TEST:10.480 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3832,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:44:13.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:44:14.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 30 22:44:16.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1851 create -f -'
Jan 30 22:44:19.773: INFO: stderr: ""
Jan 30 22:44:19.773: INFO: stdout: "e2e-test-crd-publish-openapi-4642-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 30 22:44:19.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1851 delete e2e-test-crd-publish-openapi-4642-crds test-cr'
Jan 30 22:44:20.058: INFO: stderr: ""
Jan 30 22:44:20.058: INFO: stdout: "e2e-test-crd-publish-openapi-4642-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 30 22:44:20.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1851 apply -f -'
Jan 30 22:44:20.432: INFO: stderr: ""
Jan 30 22:44:20.433: INFO: stdout: "e2e-test-crd-publish-openapi-4642-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 30 22:44:20.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1851 delete e2e-test-crd-publish-openapi-4642-crds test-cr'
Jan 30 22:44:20.704: INFO: stderr: ""
Jan 30 22:44:20.704: INFO: stdout: "e2e-test-crd-publish-openapi-4642-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 30 22:44:20.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4642-crds'
Jan 30 22:44:21.002: INFO: stderr: ""
Jan 30 22:44:21.002: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4642-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:44:24.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1851" for this suite.

• [SLOW TEST:10.264 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":231,"skipped":3850,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:44:24.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 30 22:44:24.202: INFO: Waiting up to 5m0s for pod "pod-f763c40b-6acd-4346-9c83-f6694e256960" in namespace "emptydir-1204" to be "success or failure"
Jan 30 22:44:24.205: INFO: Pod "pod-f763c40b-6acd-4346-9c83-f6694e256960": Phase="Pending", Reason="", readiness=false. Elapsed: 2.758984ms
Jan 30 22:44:26.214: INFO: Pod "pod-f763c40b-6acd-4346-9c83-f6694e256960": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011258986s
Jan 30 22:44:28.223: INFO: Pod "pod-f763c40b-6acd-4346-9c83-f6694e256960": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020192326s
Jan 30 22:44:30.231: INFO: Pod "pod-f763c40b-6acd-4346-9c83-f6694e256960": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027990138s
Jan 30 22:44:32.240: INFO: Pod "pod-f763c40b-6acd-4346-9c83-f6694e256960": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037757147s
STEP: Saw pod success
Jan 30 22:44:32.241: INFO: Pod "pod-f763c40b-6acd-4346-9c83-f6694e256960" satisfied condition "success or failure"
Jan 30 22:44:32.246: INFO: Trying to get logs from node jerma-node pod pod-f763c40b-6acd-4346-9c83-f6694e256960 container test-container: 
STEP: delete the pod
Jan 30 22:44:32.280: INFO: Waiting for pod pod-f763c40b-6acd-4346-9c83-f6694e256960 to disappear
Jan 30 22:44:32.283: INFO: Pod pod-f763c40b-6acd-4346-9c83-f6694e256960 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:44:32.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1204" for this suite.

• [SLOW TEST:8.172 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3850,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:44:32.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jan 30 22:44:32.375: INFO: Waiting up to 5m0s for pod "downward-api-c1dd7418-1f2a-42de-ae6a-4434a0067ee4" in namespace "downward-api-6369" to be "success or failure"
Jan 30 22:44:32.386: INFO: Pod "downward-api-c1dd7418-1f2a-42de-ae6a-4434a0067ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.601869ms
Jan 30 22:44:34.391: INFO: Pod "downward-api-c1dd7418-1f2a-42de-ae6a-4434a0067ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01571704s
Jan 30 22:44:36.399: INFO: Pod "downward-api-c1dd7418-1f2a-42de-ae6a-4434a0067ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023748428s
Jan 30 22:44:38.411: INFO: Pod "downward-api-c1dd7418-1f2a-42de-ae6a-4434a0067ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03557808s
Jan 30 22:44:40.419: INFO: Pod "downward-api-c1dd7418-1f2a-42de-ae6a-4434a0067ee4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043712176s
STEP: Saw pod success
Jan 30 22:44:40.419: INFO: Pod "downward-api-c1dd7418-1f2a-42de-ae6a-4434a0067ee4" satisfied condition "success or failure"
Jan 30 22:44:40.425: INFO: Trying to get logs from node jerma-node pod downward-api-c1dd7418-1f2a-42de-ae6a-4434a0067ee4 container dapi-container: 
STEP: delete the pod
Jan 30 22:44:40.468: INFO: Waiting for pod downward-api-c1dd7418-1f2a-42de-ae6a-4434a0067ee4 to disappear
Jan 30 22:44:40.540: INFO: Pod downward-api-c1dd7418-1f2a-42de-ae6a-4434a0067ee4 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:44:40.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6369" for this suite.

• [SLOW TEST:8.274 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3850,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:44:40.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-73284918-3066-48e8-b5c2-d3507375129d
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-73284918-3066-48e8-b5c2-d3507375129d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:45:53.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6856" for this suite.

• [SLOW TEST:73.203 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3855,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:45:53.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-e6d598da-8392-472e-9417-7021284f130c
STEP: Creating a pod to test consume configMaps
Jan 30 22:45:53.979: INFO: Waiting up to 5m0s for pod "pod-configmaps-116f082b-4e44-4f4a-a564-f6d44a7c7635" in namespace "configmap-2559" to be "success or failure"
Jan 30 22:45:54.018: INFO: Pod "pod-configmaps-116f082b-4e44-4f4a-a564-f6d44a7c7635": Phase="Pending", Reason="", readiness=false. Elapsed: 38.960818ms
Jan 30 22:45:56.025: INFO: Pod "pod-configmaps-116f082b-4e44-4f4a-a564-f6d44a7c7635": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045987762s
Jan 30 22:45:58.032: INFO: Pod "pod-configmaps-116f082b-4e44-4f4a-a564-f6d44a7c7635": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053066132s
Jan 30 22:46:00.039: INFO: Pod "pod-configmaps-116f082b-4e44-4f4a-a564-f6d44a7c7635": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060125398s
Jan 30 22:46:02.050: INFO: Pod "pod-configmaps-116f082b-4e44-4f4a-a564-f6d44a7c7635": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071109717s
STEP: Saw pod success
Jan 30 22:46:02.050: INFO: Pod "pod-configmaps-116f082b-4e44-4f4a-a564-f6d44a7c7635" satisfied condition "success or failure"
Jan 30 22:46:02.056: INFO: Trying to get logs from node jerma-node pod pod-configmaps-116f082b-4e44-4f4a-a564-f6d44a7c7635 container configmap-volume-test: 
STEP: delete the pod
Jan 30 22:46:02.246: INFO: Waiting for pod pod-configmaps-116f082b-4e44-4f4a-a564-f6d44a7c7635 to disappear
Jan 30 22:46:02.306: INFO: Pod pod-configmaps-116f082b-4e44-4f4a-a564-f6d44a7c7635 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:46:02.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2559" for this suite.

• [SLOW TEST:8.550 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3892,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:46:02.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 22:46:02.492: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e80b543f-50a6-4982-9c7b-83b9726cc45b" in namespace "downward-api-5236" to be "success or failure"
Jan 30 22:46:02.588: INFO: Pod "downwardapi-volume-e80b543f-50a6-4982-9c7b-83b9726cc45b": Phase="Pending", Reason="", readiness=false. Elapsed: 95.343879ms
Jan 30 22:46:04.597: INFO: Pod "downwardapi-volume-e80b543f-50a6-4982-9c7b-83b9726cc45b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10391687s
Jan 30 22:46:06.604: INFO: Pod "downwardapi-volume-e80b543f-50a6-4982-9c7b-83b9726cc45b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111814629s
Jan 30 22:46:08.620: INFO: Pod "downwardapi-volume-e80b543f-50a6-4982-9c7b-83b9726cc45b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127610504s
Jan 30 22:46:10.638: INFO: Pod "downwardapi-volume-e80b543f-50a6-4982-9c7b-83b9726cc45b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.14556784s
STEP: Saw pod success
Jan 30 22:46:10.639: INFO: Pod "downwardapi-volume-e80b543f-50a6-4982-9c7b-83b9726cc45b" satisfied condition "success or failure"
Jan 30 22:46:10.646: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e80b543f-50a6-4982-9c7b-83b9726cc45b container client-container: 
STEP: delete the pod
Jan 30 22:46:10.724: INFO: Waiting for pod downwardapi-volume-e80b543f-50a6-4982-9c7b-83b9726cc45b to disappear
Jan 30 22:46:10.752: INFO: Pod downwardapi-volume-e80b543f-50a6-4982-9c7b-83b9726cc45b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:46:10.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5236" for this suite.

• [SLOW TEST:8.453 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3894,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:46:10.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0130 22:46:51.784740       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 30 22:46:51.784: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:46:51.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3500" for this suite.

• [SLOW TEST:41.027 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":237,"skipped":3909,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:46:51.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:46:52.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6717" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3946,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:46:52.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-12ffc920-865b-45fd-b78b-d412d58e565b
STEP: Creating a pod to test consume configMaps
Jan 30 22:46:52.290: INFO: Waiting up to 5m0s for pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e" in namespace "configmap-1443" to be "success or failure"
Jan 30 22:46:52.296: INFO: Pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183863ms
Jan 30 22:46:54.301: INFO: Pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011151879s
Jan 30 22:46:56.329: INFO: Pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039189636s
Jan 30 22:46:58.371: INFO: Pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080491186s
Jan 30 22:47:01.129: INFO: Pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.838585015s
Jan 30 22:47:03.343: INFO: Pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.052463673s
Jan 30 22:47:06.381: INFO: Pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.090268547s
Jan 30 22:47:08.580: INFO: Pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.290204132s
Jan 30 22:47:10.674: INFO: Pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.383960574s
STEP: Saw pod success
Jan 30 22:47:10.674: INFO: Pod "pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e" satisfied condition "success or failure"
Jan 30 22:47:10.679: INFO: Trying to get logs from node jerma-node pod pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e container configmap-volume-test: 
STEP: delete the pod
Jan 30 22:47:10.980: INFO: Waiting for pod pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e to disappear
Jan 30 22:47:11.008: INFO: Pod pod-configmaps-49bd5840-72ba-4baf-b404-85f24feee49e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:47:11.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1443" for this suite.

• [SLOW TEST:19.075 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":4002,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:47:11.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 30 22:47:20.504: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:47:20.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4582" for this suite.

• [SLOW TEST:9.454 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":4056,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:47:20.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 30 22:47:20.803: INFO: Waiting up to 5m0s for pod "pod-017f7cfc-1f56-42c3-8792-09e3404cc493" in namespace "emptydir-7384" to be "success or failure"
Jan 30 22:47:20.818: INFO: Pod "pod-017f7cfc-1f56-42c3-8792-09e3404cc493": Phase="Pending", Reason="", readiness=false. Elapsed: 14.357604ms
Jan 30 22:47:22.826: INFO: Pod "pod-017f7cfc-1f56-42c3-8792-09e3404cc493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022881289s
Jan 30 22:47:24.833: INFO: Pod "pod-017f7cfc-1f56-42c3-8792-09e3404cc493": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029457025s
Jan 30 22:47:26.841: INFO: Pod "pod-017f7cfc-1f56-42c3-8792-09e3404cc493": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037582828s
Jan 30 22:47:28.849: INFO: Pod "pod-017f7cfc-1f56-42c3-8792-09e3404cc493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045639709s
STEP: Saw pod success
Jan 30 22:47:28.849: INFO: Pod "pod-017f7cfc-1f56-42c3-8792-09e3404cc493" satisfied condition "success or failure"
Jan 30 22:47:28.855: INFO: Trying to get logs from node jerma-node pod pod-017f7cfc-1f56-42c3-8792-09e3404cc493 container test-container: 
STEP: delete the pod
Jan 30 22:47:28.903: INFO: Waiting for pod pod-017f7cfc-1f56-42c3-8792-09e3404cc493 to disappear
Jan 30 22:47:28.908: INFO: Pod pod-017f7cfc-1f56-42c3-8792-09e3404cc493 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:47:28.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7384" for this suite.

• [SLOW TEST:8.273 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4059,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:47:28.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-cls9
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 22:47:29.118: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cls9" in namespace "subpath-4092" to be "success or failure"
Jan 30 22:47:29.148: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.973422ms
Jan 30 22:47:31.155: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03627978s
Jan 30 22:47:33.162: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043482724s
Jan 30 22:47:35.170: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051223941s
Jan 30 22:47:37.175: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 8.057138525s
Jan 30 22:47:39.181: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 10.062923186s
Jan 30 22:47:41.189: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 12.071055521s
Jan 30 22:47:43.198: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 14.07943745s
Jan 30 22:47:45.203: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 16.085114019s
Jan 30 22:47:47.213: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 18.094392803s
Jan 30 22:47:49.223: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 20.104469746s
Jan 30 22:47:51.232: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 22.113848116s
Jan 30 22:47:53.242: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 24.123709125s
Jan 30 22:47:55.251: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 26.132785201s
Jan 30 22:47:57.258: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Running", Reason="", readiness=true. Elapsed: 28.139997451s
Jan 30 22:47:59.268: INFO: Pod "pod-subpath-test-secret-cls9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.149221785s
STEP: Saw pod success
Jan 30 22:47:59.268: INFO: Pod "pod-subpath-test-secret-cls9" satisfied condition "success or failure"
Jan 30 22:47:59.273: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-cls9 container test-container-subpath-secret-cls9: 
STEP: delete the pod
Jan 30 22:47:59.453: INFO: Waiting for pod pod-subpath-test-secret-cls9 to disappear
Jan 30 22:47:59.470: INFO: Pod pod-subpath-test-secret-cls9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-cls9
Jan 30 22:47:59.471: INFO: Deleting pod "pod-subpath-test-secret-cls9" in namespace "subpath-4092"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:47:59.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4092" for this suite.

• [SLOW TEST:30.577 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":242,"skipped":4117,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:47:59.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-b58c0728-5686-4342-9522-56b69799095d
STEP: Creating a pod to test consume configMaps
Jan 30 22:47:59.676: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d76d2aa9-b18e-4fc0-bb2f-94e4ad092632" in namespace "projected-7661" to be "success or failure"
Jan 30 22:47:59.684: INFO: Pod "pod-projected-configmaps-d76d2aa9-b18e-4fc0-bb2f-94e4ad092632": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047868ms
Jan 30 22:48:01.691: INFO: Pod "pod-projected-configmaps-d76d2aa9-b18e-4fc0-bb2f-94e4ad092632": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014879201s
Jan 30 22:48:03.697: INFO: Pod "pod-projected-configmaps-d76d2aa9-b18e-4fc0-bb2f-94e4ad092632": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021199413s
Jan 30 22:48:05.705: INFO: Pod "pod-projected-configmaps-d76d2aa9-b18e-4fc0-bb2f-94e4ad092632": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028722451s
Jan 30 22:48:07.712: INFO: Pod "pod-projected-configmaps-d76d2aa9-b18e-4fc0-bb2f-94e4ad092632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036183388s
STEP: Saw pod success
Jan 30 22:48:07.712: INFO: Pod "pod-projected-configmaps-d76d2aa9-b18e-4fc0-bb2f-94e4ad092632" satisfied condition "success or failure"
Jan 30 22:48:07.716: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-d76d2aa9-b18e-4fc0-bb2f-94e4ad092632 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 22:48:07.768: INFO: Waiting for pod pod-projected-configmaps-d76d2aa9-b18e-4fc0-bb2f-94e4ad092632 to disappear
Jan 30 22:48:07.776: INFO: Pod pod-projected-configmaps-d76d2aa9-b18e-4fc0-bb2f-94e4ad092632 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:48:07.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7661" for this suite.

• [SLOW TEST:8.297 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4181,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:48:07.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-dc1b6af5-0aec-46ae-a184-f36920080391
STEP: Creating a pod to test consume secrets
Jan 30 22:48:07.940: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7582eceb-0e38-4505-9176-3cd1f87ca0d4" in namespace "projected-4184" to be "success or failure"
Jan 30 22:48:07.954: INFO: Pod "pod-projected-secrets-7582eceb-0e38-4505-9176-3cd1f87ca0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 13.655959ms
Jan 30 22:48:09.962: INFO: Pod "pod-projected-secrets-7582eceb-0e38-4505-9176-3cd1f87ca0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021100555s
Jan 30 22:48:11.970: INFO: Pod "pod-projected-secrets-7582eceb-0e38-4505-9176-3cd1f87ca0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029780222s
Jan 30 22:48:13.975: INFO: Pod "pod-projected-secrets-7582eceb-0e38-4505-9176-3cd1f87ca0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034155005s
Jan 30 22:48:15.980: INFO: Pod "pod-projected-secrets-7582eceb-0e38-4505-9176-3cd1f87ca0d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039342777s
STEP: Saw pod success
Jan 30 22:48:15.980: INFO: Pod "pod-projected-secrets-7582eceb-0e38-4505-9176-3cd1f87ca0d4" satisfied condition "success or failure"
Jan 30 22:48:15.983: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-7582eceb-0e38-4505-9176-3cd1f87ca0d4 container projected-secret-volume-test: 
STEP: delete the pod
Jan 30 22:48:16.038: INFO: Waiting for pod pod-projected-secrets-7582eceb-0e38-4505-9176-3cd1f87ca0d4 to disappear
Jan 30 22:48:16.100: INFO: Pod pod-projected-secrets-7582eceb-0e38-4505-9176-3cd1f87ca0d4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:48:16.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4184" for this suite.

• [SLOW TEST:8.325 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4183,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:48:16.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Jan 30 22:48:16.819: INFO: created pod pod-service-account-defaultsa
Jan 30 22:48:16.819: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 30 22:48:16.827: INFO: created pod pod-service-account-mountsa
Jan 30 22:48:16.827: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 30 22:48:16.859: INFO: created pod pod-service-account-nomountsa
Jan 30 22:48:16.860: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 30 22:48:16.888: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 30 22:48:16.888: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 30 22:48:17.070: INFO: created pod pod-service-account-mountsa-mountspec
Jan 30 22:48:17.070: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 30 22:48:17.094: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 30 22:48:17.095: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 30 22:48:17.117: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 30 22:48:17.117: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 30 22:48:17.169: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 30 22:48:17.169: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 30 22:48:17.250: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 30 22:48:17.250: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:48:17.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2421" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":245,"skipped":4186,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:48:19.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 2.213.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.213.2_udp@PTR;check="$$(dig +tcp +noall +answer +search 2.213.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.213.2_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 2.213.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.213.2_udp@PTR;check="$$(dig +tcp +noall +answer +search 2.213.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.213.2_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 22:48:44.117: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.121: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.126: INFO: Unable to read wheezy_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.135: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.139: INFO: Unable to read wheezy_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.145: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.155: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.161: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.190: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.196: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.201: INFO: Unable to read jessie_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.204: INFO: Unable to read jessie_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.209: INFO: Unable to read jessie_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.213: INFO: Unable to read jessie_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.220: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.226: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:44.249: INFO: Lookups using dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2 wheezy_tcp@dns-test-service.dns-2 wheezy_udp@dns-test-service.dns-2.svc wheezy_tcp@dns-test-service.dns-2.svc wheezy_udp@_http._tcp.dns-test-service.dns-2.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2 jessie_tcp@dns-test-service.dns-2 jessie_udp@dns-test-service.dns-2.svc jessie_tcp@dns-test-service.dns-2.svc jessie_udp@_http._tcp.dns-test-service.dns-2.svc jessie_tcp@_http._tcp.dns-test-service.dns-2.svc]

Jan 30 22:48:49.261: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.267: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.272: INFO: Unable to read wheezy_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.276: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.280: INFO: Unable to read wheezy_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.283: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.286: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.290: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.336: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.340: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.362: INFO: Unable to read jessie_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.366: INFO: Unable to read jessie_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.370: INFO: Unable to read jessie_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.373: INFO: Unable to read jessie_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.376: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.381: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:49.409: INFO: Lookups using dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2 wheezy_tcp@dns-test-service.dns-2 wheezy_udp@dns-test-service.dns-2.svc wheezy_tcp@dns-test-service.dns-2.svc wheezy_udp@_http._tcp.dns-test-service.dns-2.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2 jessie_tcp@dns-test-service.dns-2 jessie_udp@dns-test-service.dns-2.svc jessie_tcp@dns-test-service.dns-2.svc jessie_udp@_http._tcp.dns-test-service.dns-2.svc jessie_tcp@_http._tcp.dns-test-service.dns-2.svc]

Jan 30 22:48:54.268: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.285: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.293: INFO: Unable to read wheezy_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.300: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.305: INFO: Unable to read wheezy_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.323: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.332: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.339: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.385: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.389: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.397: INFO: Unable to read jessie_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.405: INFO: Unable to read jessie_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.413: INFO: Unable to read jessie_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.421: INFO: Unable to read jessie_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.426: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.433: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:54.477: INFO: Lookups using dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2 wheezy_tcp@dns-test-service.dns-2 wheezy_udp@dns-test-service.dns-2.svc wheezy_tcp@dns-test-service.dns-2.svc wheezy_udp@_http._tcp.dns-test-service.dns-2.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2 jessie_tcp@dns-test-service.dns-2 jessie_udp@dns-test-service.dns-2.svc jessie_tcp@dns-test-service.dns-2.svc jessie_udp@_http._tcp.dns-test-service.dns-2.svc jessie_tcp@_http._tcp.dns-test-service.dns-2.svc]

Jan 30 22:48:59.259: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.266: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.272: INFO: Unable to read wheezy_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.277: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.282: INFO: Unable to read wheezy_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.287: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.292: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.296: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.329: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.333: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.337: INFO: Unable to read jessie_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.342: INFO: Unable to read jessie_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.346: INFO: Unable to read jessie_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.350: INFO: Unable to read jessie_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.353: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.376: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:48:59.398: INFO: Lookups using dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2 wheezy_tcp@dns-test-service.dns-2 wheezy_udp@dns-test-service.dns-2.svc wheezy_tcp@dns-test-service.dns-2.svc wheezy_udp@_http._tcp.dns-test-service.dns-2.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2 jessie_tcp@dns-test-service.dns-2 jessie_udp@dns-test-service.dns-2.svc jessie_tcp@dns-test-service.dns-2.svc jessie_udp@_http._tcp.dns-test-service.dns-2.svc jessie_tcp@_http._tcp.dns-test-service.dns-2.svc]

Jan 30 22:49:04.260: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.265: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.270: INFO: Unable to read wheezy_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.274: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.277: INFO: Unable to read wheezy_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.281: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.285: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.287: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.317: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.324: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.328: INFO: Unable to read jessie_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.331: INFO: Unable to read jessie_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.334: INFO: Unable to read jessie_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.337: INFO: Unable to read jessie_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.340: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.345: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:04.366: INFO: Lookups using dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2 wheezy_tcp@dns-test-service.dns-2 wheezy_udp@dns-test-service.dns-2.svc wheezy_tcp@dns-test-service.dns-2.svc wheezy_udp@_http._tcp.dns-test-service.dns-2.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2 jessie_tcp@dns-test-service.dns-2 jessie_udp@dns-test-service.dns-2.svc jessie_tcp@dns-test-service.dns-2.svc jessie_udp@_http._tcp.dns-test-service.dns-2.svc jessie_tcp@_http._tcp.dns-test-service.dns-2.svc]

Jan 30 22:49:09.294: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.303: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.307: INFO: Unable to read wheezy_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.315: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.324: INFO: Unable to read wheezy_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.332: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.336: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.340: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.364: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.367: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.391: INFO: Unable to read jessie_udp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.405: INFO: Unable to read jessie_tcp@dns-test-service.dns-2 from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.409: INFO: Unable to read jessie_udp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.412: INFO: Unable to read jessie_tcp@dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.414: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.416: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2.svc from pod dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592: the server could not find the requested resource (get pods dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592)
Jan 30 22:49:09.432: INFO: Lookups using dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2 wheezy_tcp@dns-test-service.dns-2 wheezy_udp@dns-test-service.dns-2.svc wheezy_tcp@dns-test-service.dns-2.svc wheezy_udp@_http._tcp.dns-test-service.dns-2.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2 jessie_tcp@dns-test-service.dns-2 jessie_udp@dns-test-service.dns-2.svc jessie_tcp@dns-test-service.dns-2.svc jessie_udp@_http._tcp.dns-test-service.dns-2.svc jessie_tcp@_http._tcp.dns-test-service.dns-2.svc]

Jan 30 22:49:14.417: INFO: DNS probes using dns-2/dns-test-6ee8fe9b-4644-4f10-b824-09ae8db1a592 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:49:14.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2" for this suite.

• [SLOW TEST:55.664 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":246,"skipped":4196,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:49:14.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-2l9m
STEP: Creating a pod to test atomic-volume-subpath
Jan 30 22:49:14.820: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2l9m" in namespace "subpath-640" to be "success or failure"
Jan 30 22:49:14.825: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069649ms
Jan 30 22:49:16.831: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010153013s
Jan 30 22:49:18.837: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016134689s
Jan 30 22:49:20.857: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036541328s
Jan 30 22:49:22.868: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047622174s
Jan 30 22:49:24.876: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Running", Reason="", readiness=true. Elapsed: 10.055339328s
Jan 30 22:49:26.883: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Running", Reason="", readiness=true. Elapsed: 12.062919269s
Jan 30 22:49:28.888: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Running", Reason="", readiness=true. Elapsed: 14.067430384s
Jan 30 22:49:30.899: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Running", Reason="", readiness=true. Elapsed: 16.078085709s
Jan 30 22:49:32.934: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Running", Reason="", readiness=true. Elapsed: 18.113275663s
Jan 30 22:49:34.943: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Running", Reason="", readiness=true. Elapsed: 20.122123258s
Jan 30 22:49:36.950: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Running", Reason="", readiness=true. Elapsed: 22.129242592s
Jan 30 22:49:38.958: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Running", Reason="", readiness=true. Elapsed: 24.137172747s
Jan 30 22:49:40.964: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Running", Reason="", readiness=true. Elapsed: 26.143380245s
Jan 30 22:49:43.252: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Running", Reason="", readiness=true. Elapsed: 28.431764368s
Jan 30 22:49:45.258: INFO: Pod "pod-subpath-test-projected-2l9m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.43737016s
STEP: Saw pod success
Jan 30 22:49:45.258: INFO: Pod "pod-subpath-test-projected-2l9m" satisfied condition "success or failure"
Jan 30 22:49:45.269: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-2l9m container test-container-subpath-projected-2l9m: 
STEP: delete the pod
Jan 30 22:49:45.310: INFO: Waiting for pod pod-subpath-test-projected-2l9m to disappear
Jan 30 22:49:45.313: INFO: Pod pod-subpath-test-projected-2l9m no longer exists
STEP: Deleting pod pod-subpath-test-projected-2l9m
Jan 30 22:49:45.313: INFO: Deleting pod "pod-subpath-test-projected-2l9m" in namespace "subpath-640"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:49:45.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-640" for this suite.

• [SLOW TEST:30.710 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":247,"skipped":4213,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:49:45.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-a9468436-2f50-42e0-8dd0-833af1c46153
STEP: Creating a pod to test consume configMaps
Jan 30 22:49:45.556: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3124afac-f5ca-4f0c-9cfa-8900d12da787" in namespace "projected-1386" to be "success or failure"
Jan 30 22:49:45.560: INFO: Pod "pod-projected-configmaps-3124afac-f5ca-4f0c-9cfa-8900d12da787": Phase="Pending", Reason="", readiness=false. Elapsed: 3.709415ms
Jan 30 22:49:47.566: INFO: Pod "pod-projected-configmaps-3124afac-f5ca-4f0c-9cfa-8900d12da787": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010034684s
Jan 30 22:49:49.572: INFO: Pod "pod-projected-configmaps-3124afac-f5ca-4f0c-9cfa-8900d12da787": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015937144s
Jan 30 22:49:51.578: INFO: Pod "pod-projected-configmaps-3124afac-f5ca-4f0c-9cfa-8900d12da787": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022452948s
Jan 30 22:49:53.584: INFO: Pod "pod-projected-configmaps-3124afac-f5ca-4f0c-9cfa-8900d12da787": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.028466837s
STEP: Saw pod success
Jan 30 22:49:53.584: INFO: Pod "pod-projected-configmaps-3124afac-f5ca-4f0c-9cfa-8900d12da787" satisfied condition "success or failure"
Jan 30 22:49:53.588: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-3124afac-f5ca-4f0c-9cfa-8900d12da787 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 22:49:53.662: INFO: Waiting for pod pod-projected-configmaps-3124afac-f5ca-4f0c-9cfa-8900d12da787 to disappear
Jan 30 22:49:53.724: INFO: Pod pod-projected-configmaps-3124afac-f5ca-4f0c-9cfa-8900d12da787 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:49:53.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1386" for this suite.

• [SLOW TEST:8.356 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4218,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:49:53.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 30 22:50:01.952: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 30 22:50:17.098: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:50:17.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9675" for this suite.

• [SLOW TEST:23.375 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":249,"skipped":4254,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:50:17.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-4060bb8a-adfc-4096-b9b8-fb3c738faf9a
STEP: Creating a pod to test consume configMaps
Jan 30 22:50:17.182: INFO: Waiting up to 5m0s for pod "pod-configmaps-82358256-64e5-4304-8511-d5fd45a9c4fe" in namespace "configmap-7310" to be "success or failure"
Jan 30 22:50:17.226: INFO: Pod "pod-configmaps-82358256-64e5-4304-8511-d5fd45a9c4fe": Phase="Pending", Reason="", readiness=false. Elapsed: 43.993035ms
Jan 30 22:50:19.239: INFO: Pod "pod-configmaps-82358256-64e5-4304-8511-d5fd45a9c4fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056676423s
Jan 30 22:50:21.248: INFO: Pod "pod-configmaps-82358256-64e5-4304-8511-d5fd45a9c4fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066509305s
Jan 30 22:50:23.253: INFO: Pod "pod-configmaps-82358256-64e5-4304-8511-d5fd45a9c4fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070697927s
Jan 30 22:50:25.259: INFO: Pod "pod-configmaps-82358256-64e5-4304-8511-d5fd45a9c4fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07693285s
STEP: Saw pod success
Jan 30 22:50:25.259: INFO: Pod "pod-configmaps-82358256-64e5-4304-8511-d5fd45a9c4fe" satisfied condition "success or failure"
Jan 30 22:50:25.262: INFO: Trying to get logs from node jerma-node pod pod-configmaps-82358256-64e5-4304-8511-d5fd45a9c4fe container configmap-volume-test: 
STEP: delete the pod
Jan 30 22:50:25.329: INFO: Waiting for pod pod-configmaps-82358256-64e5-4304-8511-d5fd45a9c4fe to disappear
Jan 30 22:50:25.363: INFO: Pod pod-configmaps-82358256-64e5-4304-8511-d5fd45a9c4fe no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:50:25.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7310" for this suite.

• [SLOW TEST:8.259 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4284,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
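
For reference, a minimal manifest pair approximating what this spec creates: one ConfigMap consumed through two separate volumes in the same pod. The mounttest image, args, and fixed names are assumptions; the framework generates UUID-suffixed names like those in the log above.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume          # illustrative; the suite uses a UUID-suffixed name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test        # matches the container name in the log above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption: the suite's mount-test image
    args: ["--file_content=/etc/configmap-volume/data-1"]    # assumption: reads the key back from one mount
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1           # the same ConfigMap, consumed twice
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
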
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:50:25.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 30 22:50:25.532: INFO: Waiting up to 5m0s for pod "pod-7d0a097d-f675-4343-8f80-a25129fc901f" in namespace "emptydir-76" to be "success or failure"
Jan 30 22:50:25.550: INFO: Pod "pod-7d0a097d-f675-4343-8f80-a25129fc901f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.7236ms
Jan 30 22:50:27.556: INFO: Pod "pod-7d0a097d-f675-4343-8f80-a25129fc901f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023812194s
Jan 30 22:50:29.576: INFO: Pod "pod-7d0a097d-f675-4343-8f80-a25129fc901f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044102761s
Jan 30 22:50:31.583: INFO: Pod "pod-7d0a097d-f675-4343-8f80-a25129fc901f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050678877s
Jan 30 22:50:33.591: INFO: Pod "pod-7d0a097d-f675-4343-8f80-a25129fc901f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058728577s
STEP: Saw pod success
Jan 30 22:50:33.591: INFO: Pod "pod-7d0a097d-f675-4343-8f80-a25129fc901f" satisfied condition "success or failure"
Jan 30 22:50:33.596: INFO: Trying to get logs from node jerma-node pod pod-7d0a097d-f675-4343-8f80-a25129fc901f container test-container: 
STEP: delete the pod
Jan 30 22:50:33.832: INFO: Waiting for pod pod-7d0a097d-f675-4343-8f80-a25129fc901f to disappear
Jan 30 22:50:33.843: INFO: Pod pod-7d0a097d-f675-4343-8f80-a25129fc901f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:50:33.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-76" for this suite.

• [SLOW TEST:8.482 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4286,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
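
A hedged sketch of the pod this variant exercises: an emptyDir on the default (disk-backed) medium, a 0644 file created by a non-root user. The image, uid, and mounttest flags below are assumptions inferred from the spec's (non-root,0644,default) naming.

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644
spec:
  restartPolicy: Never
  containers:
  - name: test-container               # matches the container name in the log above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    args:                              # flags approximate: create a 0644 file, then report its perms
    - --fs_type=/test-volume
    - --new_file_0644=/test-volume/test-file
    - --file_perm=/test-volume/test-file
    securityContext:
      runAsUser: 1001                  # assumption: any non-root uid satisfies the non-root variant
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # "default" medium, i.e. node disk rather than Memory
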
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:50:33.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:50:34.212: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-83f35861-3f30-490c-af4d-1fc1271760f5" in namespace "security-context-test-7412" to be "success or failure"
Jan 30 22:50:34.221: INFO: Pod "busybox-privileged-false-83f35861-3f30-490c-af4d-1fc1271760f5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.809519ms
Jan 30 22:50:36.230: INFO: Pod "busybox-privileged-false-83f35861-3f30-490c-af4d-1fc1271760f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017995957s
Jan 30 22:50:38.253: INFO: Pod "busybox-privileged-false-83f35861-3f30-490c-af4d-1fc1271760f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041352572s
Jan 30 22:50:40.261: INFO: Pod "busybox-privileged-false-83f35861-3f30-490c-af4d-1fc1271760f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049411263s
Jan 30 22:50:42.269: INFO: Pod "busybox-privileged-false-83f35861-3f30-490c-af4d-1fc1271760f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057155133s
Jan 30 22:50:42.269: INFO: Pod "busybox-privileged-false-83f35861-3f30-490c-af4d-1fc1271760f5" satisfied condition "success or failure"
Jan 30 22:50:42.308: INFO: Got logs for pod "busybox-privileged-false-83f35861-3f30-490c-af4d-1fc1271760f5": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:50:42.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7412" for this suite.

• [SLOW TEST:8.465 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4302,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:50:42.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jan 30 22:50:42.449: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82e641a6-f4db-4179-9175-bb5312541ac2" in namespace "downward-api-4628" to be "success or failure"
Jan 30 22:50:42.474: INFO: Pod "downwardapi-volume-82e641a6-f4db-4179-9175-bb5312541ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 24.468955ms
Jan 30 22:50:44.488: INFO: Pod "downwardapi-volume-82e641a6-f4db-4179-9175-bb5312541ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03910008s
Jan 30 22:50:46.499: INFO: Pod "downwardapi-volume-82e641a6-f4db-4179-9175-bb5312541ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049373952s
Jan 30 22:50:48.510: INFO: Pod "downwardapi-volume-82e641a6-f4db-4179-9175-bb5312541ac2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061176395s
Jan 30 22:50:50.529: INFO: Pod "downwardapi-volume-82e641a6-f4db-4179-9175-bb5312541ac2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079731185s
STEP: Saw pod success
Jan 30 22:50:50.529: INFO: Pod "downwardapi-volume-82e641a6-f4db-4179-9175-bb5312541ac2" satisfied condition "success or failure"
Jan 30 22:50:50.533: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-82e641a6-f4db-4179-9175-bb5312541ac2 container client-container: 
STEP: delete the pod
Jan 30 22:50:50.596: INFO: Waiting for pod downwardapi-volume-82e641a6-f4db-4179-9175-bb5312541ac2 to disappear
Jan 30 22:50:50.605: INFO: Pod downwardapi-volume-82e641a6-f4db-4179-9175-bb5312541ac2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:50:50.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4628" for this suite.

• [SLOW TEST:8.290 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4302,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
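
In outline, this spec projects a single downward-API item and sets a per-item file mode, then has the container report the mode bits back. Image and args are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode
spec:
  restartPolicy: Never
  containers:
  - name: client-container             # matches the container name in the log above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumption
    args: ["--file_mode=/etc/podinfo/name"]                  # assumption: prints the file's mode bits
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: name
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                     # the per-item mode this spec asserts
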
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:50:50.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:50:57.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4986" for this suite.

• [SLOW TEST:7.187 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":254,"skipped":4313,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:50:57.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 30 22:50:57.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1951'
Jan 30 22:50:58.082: INFO: stderr: ""
Jan 30 22:50:58.083: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 30 22:51:08.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1951 -o json'
Jan 30 22:51:08.303: INFO: stderr: ""
Jan 30 22:51:08.303: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-30T22:50:58Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-1951\",\n        \"resourceVersion\": \"5390995\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1951/pods/e2e-test-httpd-pod\",\n        \"uid\": \"0f4b17c2-f25d-4679-b12f-c639e0eafed3\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-dlcvc\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-dlcvc\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-dlcvc\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-30T22:50:58Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-30T22:51:04Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-30T22:51:04Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-30T22:50:58Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://fc9c707411ceeb7d6a8832487b16fac8a18617037049bd0c741f66b64f2b07cb\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-30T22:51:03Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.2\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.2\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-30T22:50:58Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 30 22:51:08.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1951'
Jan 30 22:51:08.949: INFO: stderr: ""
Jan 30 22:51:08.950: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Jan 30 22:51:08.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1951'
Jan 30 22:51:15.341: INFO: stderr: ""
Jan 30 22:51:15.341: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:51:15.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1951" for this suite.

• [SLOW TEST:17.551 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":255,"skipped":4313,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
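
Roughly the manifest the "replace the image" step pipes to kubectl replace -f - above, trimmed for readability: the test feeds back the full JSON it fetched (resourceVersion included) with only the image string swapped to the busybox:1.29 value it then verifies. Image is one of the few mutable fields on a running pod, which is why only that field changes.

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  namespace: kubectl-1951
  labels:
    run: e2e-test-httpd-pod
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/busybox:1.29   # the only field changed, per the verification step above
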
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:51:15.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 30 22:51:22.745: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:51:22.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9792" for this suite.

• [SLOW TEST:7.432 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4324,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
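
A sketch of the terminated-container case this spec checks: the container writes "OK" to its terminationMessagePath and exits 0; with FallbackToLogsOnError the kubelet still reports the file contents, since logs are consulted only when a container fails with an empty message file. Names and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-from-file
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: docker.io/library/busybox:1.29                        # illustrative
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]   # writes the message file, then exits 0
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError   # file wins here; logs only on error with an empty file
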
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:51:22.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:51:23.071: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 30 22:51:23.116: INFO: Number of nodes with available pods: 0
Jan 30 22:51:23.116: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 30 22:51:23.239: INFO: Number of nodes with available pods: 0
Jan 30 22:51:23.239: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:24.246: INFO: Number of nodes with available pods: 0
Jan 30 22:51:24.246: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:25.250: INFO: Number of nodes with available pods: 0
Jan 30 22:51:25.250: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:26.248: INFO: Number of nodes with available pods: 0
Jan 30 22:51:26.248: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:27.248: INFO: Number of nodes with available pods: 0
Jan 30 22:51:27.248: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:28.245: INFO: Number of nodes with available pods: 0
Jan 30 22:51:28.245: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:29.246: INFO: Number of nodes with available pods: 0
Jan 30 22:51:29.246: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:30.248: INFO: Number of nodes with available pods: 1
Jan 30 22:51:30.248: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 30 22:51:30.309: INFO: Number of nodes with available pods: 1
Jan 30 22:51:30.310: INFO: Number of running nodes: 0, number of available pods: 1
Jan 30 22:51:31.317: INFO: Number of nodes with available pods: 0
Jan 30 22:51:31.317: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 30 22:51:31.338: INFO: Number of nodes with available pods: 0
Jan 30 22:51:31.339: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:32.344: INFO: Number of nodes with available pods: 0
Jan 30 22:51:32.344: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:33.345: INFO: Number of nodes with available pods: 0
Jan 30 22:51:33.345: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:34.366: INFO: Number of nodes with available pods: 0
Jan 30 22:51:34.366: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:35.347: INFO: Number of nodes with available pods: 0
Jan 30 22:51:35.348: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:36.349: INFO: Number of nodes with available pods: 0
Jan 30 22:51:36.349: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:37.345: INFO: Number of nodes with available pods: 0
Jan 30 22:51:37.346: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:38.346: INFO: Number of nodes with available pods: 0
Jan 30 22:51:38.346: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:39.343: INFO: Number of nodes with available pods: 0
Jan 30 22:51:39.343: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:40.437: INFO: Number of nodes with available pods: 0
Jan 30 22:51:40.437: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:41.345: INFO: Number of nodes with available pods: 0
Jan 30 22:51:41.345: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:42.345: INFO: Number of nodes with available pods: 0
Jan 30 22:51:42.346: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:51:43.348: INFO: Number of nodes with available pods: 1
Jan 30 22:51:43.348: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4514, will wait for the garbage collector to delete the pods
Jan 30 22:51:43.429: INFO: Deleting DaemonSet.extensions daemon-set took: 14.476462ms
Jan 30 22:51:43.730: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.811813ms
Jan 30 22:51:52.737: INFO: Number of nodes with available pods: 0
Jan 30 22:51:52.738: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 22:51:52.741: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4514/daemonsets","resourceVersion":"5391207"},"items":null}

Jan 30 22:51:52.743: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4514/pods","resourceVersion":"5391207"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:51:52.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4514" for this suite.

• [SLOW TEST:30.002 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":257,"skipped":4326,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
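
The "complex daemon" in this spec is, in outline, a DaemonSet constrained by a node selector; the test then moves its pods by relabeling nodes (the blue/green steps above) and flips the update strategy to RollingUpdate. Selector key and image below are illustrative.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set       # illustrative selector key
  updateStrategy:
    type: RollingUpdate                # the strategy the test switches to mid-run
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      nodeSelector:
        color: blue                    # the node label the test flips between blue and green
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # illustrative
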
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:51:52.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-d02ced72-8b92-4e5b-ae6e-a09864fbff14
STEP: Creating a pod to test consume configMaps
Jan 30 22:51:53.091: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79d25422-8e49-419a-8a53-e3008a00613e" in namespace "projected-8879" to be "success or failure"
Jan 30 22:51:53.102: INFO: Pod "pod-projected-configmaps-79d25422-8e49-419a-8a53-e3008a00613e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.840828ms
Jan 30 22:51:55.115: INFO: Pod "pod-projected-configmaps-79d25422-8e49-419a-8a53-e3008a00613e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024114234s
Jan 30 22:51:57.121: INFO: Pod "pod-projected-configmaps-79d25422-8e49-419a-8a53-e3008a00613e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029505912s
Jan 30 22:51:59.127: INFO: Pod "pod-projected-configmaps-79d25422-8e49-419a-8a53-e3008a00613e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03635875s
Jan 30 22:52:01.138: INFO: Pod "pod-projected-configmaps-79d25422-8e49-419a-8a53-e3008a00613e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047165105s
STEP: Saw pod success
Jan 30 22:52:01.138: INFO: Pod "pod-projected-configmaps-79d25422-8e49-419a-8a53-e3008a00613e" satisfied condition "success or failure"
Jan 30 22:52:01.144: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-79d25422-8e49-419a-8a53-e3008a00613e container projected-configmap-volume-test: 
STEP: delete the pod
Jan 30 22:52:01.260: INFO: Waiting for pod pod-projected-configmaps-79d25422-8e49-419a-8a53-e3008a00613e to disappear
Jan 30 22:52:01.284: INFO: Pod pod-projected-configmaps-79d25422-8e49-419a-8a53-e3008a00613e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:52:01.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8879" for this suite.

• [SLOW TEST:8.506 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4337,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
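
A minimal sketch of a projected configMap volume "with mappings", i.e. an items list that remaps a key to a nested path inside the mount; image and args are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test   # matches the container name in the log above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0                      # assumption
    args: ["--file_content=/etc/projected-configmap-volume/path/to/data-2"]     # assumption
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative; the suite suffixes a UUID
          items:
          - key: data-2
            path: path/to/data-2       # the key-to-path mapping under test
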
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:52:01.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:52:07.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6703" for this suite.

• [SLOW TEST:6.035 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":259,"skipped":4342,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:52:07.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jan 30 22:52:07.478: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:52:17.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2746" for this suite.

• [SLOW TEST:10.373 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":260,"skipped":4360,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
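
The failing-init-container case in outline: with restartPolicy Never, a failed init container is not retried, the app container never starts, and the pod ends up Failed. Image and commands below are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Never                      # no retries: one failed init run fails the whole pod
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29   # illustrative
    command: ["false"]                      # always fails
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["true"]                       # never started, per the spec's title
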
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:52:17.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Jan 30 22:52:17.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6141'
Jan 30 22:52:18.440: INFO: stderr: ""
Jan 30 22:52:18.440: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 22:52:18.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6141'
Jan 30 22:52:18.685: INFO: stderr: ""
Jan 30 22:52:18.685: INFO: stdout: "update-demo-nautilus-k74ls update-demo-nautilus-s5mkz "
Jan 30 22:52:18.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74ls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6141'
Jan 30 22:52:18.780: INFO: stderr: ""
Jan 30 22:52:18.780: INFO: stdout: ""
Jan 30 22:52:18.780: INFO: update-demo-nautilus-k74ls is created but not running
Jan 30 22:52:23.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6141'
Jan 30 22:52:24.726: INFO: stderr: ""
Jan 30 22:52:24.726: INFO: stdout: "update-demo-nautilus-k74ls update-demo-nautilus-s5mkz "
Jan 30 22:52:24.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74ls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6141'
Jan 30 22:52:25.113: INFO: stderr: ""
Jan 30 22:52:25.113: INFO: stdout: ""
Jan 30 22:52:25.113: INFO: update-demo-nautilus-k74ls is created but not running
Jan 30 22:52:30.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6141'
Jan 30 22:52:30.340: INFO: stderr: ""
Jan 30 22:52:30.341: INFO: stdout: "update-demo-nautilus-k74ls update-demo-nautilus-s5mkz "
Jan 30 22:52:30.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74ls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6141'
Jan 30 22:52:30.459: INFO: stderr: ""
Jan 30 22:52:30.459: INFO: stdout: "true"
Jan 30 22:52:30.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k74ls -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6141'
Jan 30 22:52:30.608: INFO: stderr: ""
Jan 30 22:52:30.609: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 22:52:30.609: INFO: validating pod update-demo-nautilus-k74ls
Jan 30 22:52:30.628: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 22:52:30.628: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 22:52:30.628: INFO: update-demo-nautilus-k74ls is verified up and running
Jan 30 22:52:30.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s5mkz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6141'
Jan 30 22:52:30.739: INFO: stderr: ""
Jan 30 22:52:30.739: INFO: stdout: "true"
Jan 30 22:52:30.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s5mkz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6141'
Jan 30 22:52:30.873: INFO: stderr: ""
Jan 30 22:52:30.873: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 30 22:52:30.873: INFO: validating pod update-demo-nautilus-s5mkz
Jan 30 22:52:30.882: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 30 22:52:30.882: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 30 22:52:30.882: INFO: update-demo-nautilus-s5mkz is verified up and running
STEP: rolling-update to new replication controller
Jan 30 22:52:30.885: INFO: scanned /root for discovery docs: 
Jan 30 22:52:30.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6141'
Jan 30 22:53:01.400: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 30 22:53:01.401: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 30 22:53:01.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6141'
Jan 30 22:53:01.645: INFO: stderr: ""
Jan 30 22:53:01.645: INFO: stdout: "update-demo-kitten-f8rcg update-demo-kitten-r7vvm update-demo-nautilus-s5mkz "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan 30 22:53:06.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6141'
Jan 30 22:53:06.782: INFO: stderr: ""
Jan 30 22:53:06.782: INFO: stdout: "update-demo-kitten-f8rcg update-demo-kitten-r7vvm "
Jan 30 22:53:06.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f8rcg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6141'
Jan 30 22:53:06.956: INFO: stderr: ""
Jan 30 22:53:06.956: INFO: stdout: "true"
Jan 30 22:53:06.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-f8rcg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6141'
Jan 30 22:53:07.118: INFO: stderr: ""
Jan 30 22:53:07.118: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 30 22:53:07.118: INFO: validating pod update-demo-kitten-f8rcg
Jan 30 22:53:07.124: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 30 22:53:07.124: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 30 22:53:07.124: INFO: update-demo-kitten-f8rcg is verified up and running
Jan 30 22:53:07.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-r7vvm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6141'
Jan 30 22:53:07.240: INFO: stderr: ""
Jan 30 22:53:07.240: INFO: stdout: "true"
Jan 30 22:53:07.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-r7vvm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6141'
Jan 30 22:53:07.358: INFO: stderr: ""
Jan 30 22:53:07.358: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 30 22:53:07.358: INFO: validating pod update-demo-kitten-r7vvm
Jan 30 22:53:07.365: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 30 22:53:07.365: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 30 22:53:07.365: INFO: update-demo-kitten-r7vvm is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:53:07.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6141" for this suite.

• [SLOW TEST:49.666 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":261,"skipped":4372,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:53:07.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:53:07.443: INFO: Waiting up to 5m0s for pod "busybox-user-65534-1a12b2a3-6496-4db5-ab1f-e060b7089c18" in namespace "security-context-test-2988" to be "success or failure"
Jan 30 22:53:07.511: INFO: Pod "busybox-user-65534-1a12b2a3-6496-4db5-ab1f-e060b7089c18": Phase="Pending", Reason="", readiness=false. Elapsed: 68.305468ms
Jan 30 22:53:09.520: INFO: Pod "busybox-user-65534-1a12b2a3-6496-4db5-ab1f-e060b7089c18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076622853s
Jan 30 22:53:11.530: INFO: Pod "busybox-user-65534-1a12b2a3-6496-4db5-ab1f-e060b7089c18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086857671s
Jan 30 22:53:13.994: INFO: Pod "busybox-user-65534-1a12b2a3-6496-4db5-ab1f-e060b7089c18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.55041462s
Jan 30 22:53:16.332: INFO: Pod "busybox-user-65534-1a12b2a3-6496-4db5-ab1f-e060b7089c18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.88917351s
Jan 30 22:53:16.332: INFO: Pod "busybox-user-65534-1a12b2a3-6496-4db5-ab1f-e060b7089c18" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:53:16.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2988" for this suite.

• [SLOW TEST:8.972 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4372,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:53:16.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-85d8f7c7-9b26-4cdf-8cad-6d415a525384
STEP: Creating a pod to test consume secrets
Jan 30 22:53:16.815: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec" in namespace "projected-928" to be "success or failure"
Jan 30 22:53:16.837: INFO: Pod "pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec": Phase="Pending", Reason="", readiness=false. Elapsed: 22.280737ms
Jan 30 22:53:18.870: INFO: Pod "pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055026701s
Jan 30 22:53:20.883: INFO: Pod "pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067699226s
Jan 30 22:53:22.893: INFO: Pod "pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07830559s
Jan 30 22:53:24.912: INFO: Pod "pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097428534s
Jan 30 22:53:26.920: INFO: Pod "pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105353357s
STEP: Saw pod success
Jan 30 22:53:26.921: INFO: Pod "pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec" satisfied condition "success or failure"
Jan 30 22:53:26.927: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec container secret-volume-test: 
STEP: delete the pod
Jan 30 22:53:27.072: INFO: Waiting for pod pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec to disappear
Jan 30 22:53:27.082: INFO: Pod pod-projected-secrets-1305ee8e-5b2d-422b-9aff-06e690bcc2ec no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:53:27.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-928" for this suite.

• [SLOW TEST:10.753 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4376,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
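
A hedged sketch of one Secret projected through two volumes in a single pod, mirroring the multi-volume ConfigMap case earlier in this run; image, args, and the secret name are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test           # matches the container name in the log above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0          # assumption
    args: ["--file_content=/etc/projected-secret-volume/data-1"]    # assumption
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:                             # the same Secret, projected twice
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test  # illustrative; the suite suffixes a UUID
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
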
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:53:27.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 30 22:53:33.347: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:53:34.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7048" for this suite.

• [SLOW TEST:7.371 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":264,"skipped":4389,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:53:34.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jan 30 22:53:47.395: INFO: Successfully updated pod "labelsupdated52c45d2-707f-46fe-ae21-1549ad032dfa"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:53:51.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2086" for this suite.

• [SLOW TEST:17.069 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4399,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:53:51.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:54:02.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7270" for this suite.

• [SLOW TEST:11.305 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":266,"skipped":4400,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:54:02.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:54:03.009: INFO: Create a RollingUpdate DaemonSet
Jan 30 22:54:03.014: INFO: Check that daemon pods launch on every node of the cluster
Jan 30 22:54:03.027: INFO: Number of nodes with available pods: 0
Jan 30 22:54:03.027: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:54:04.328: INFO: Number of nodes with available pods: 0
Jan 30 22:54:04.328: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:54:05.041: INFO: Number of nodes with available pods: 0
Jan 30 22:54:05.041: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:54:06.039: INFO: Number of nodes with available pods: 0
Jan 30 22:54:06.039: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:54:07.046: INFO: Number of nodes with available pods: 0
Jan 30 22:54:07.046: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:54:09.075: INFO: Number of nodes with available pods: 0
Jan 30 22:54:09.075: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:54:10.754: INFO: Number of nodes with available pods: 0
Jan 30 22:54:10.754: INFO: Node jerma-node is running more than one daemon pod
Jan 30 22:54:11.180: INFO: Number of nodes with available pods: 1
Jan 30 22:54:11.180: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 22:54:12.037: INFO: Number of nodes with available pods: 1
Jan 30 22:54:12.037: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 30 22:54:13.039: INFO: Number of nodes with available pods: 2
Jan 30 22:54:13.039: INFO: Number of running nodes: 2, number of available pods: 2
Jan 30 22:54:13.039: INFO: Update the DaemonSet to trigger a rollout
Jan 30 22:54:13.047: INFO: Updating DaemonSet daemon-set
Jan 30 22:54:20.079: INFO: Roll back the DaemonSet before rollout is complete
Jan 30 22:54:20.085: INFO: Updating DaemonSet daemon-set
Jan 30 22:54:20.085: INFO: Make sure DaemonSet rollback is complete
Jan 30 22:54:20.098: INFO: Wrong image for pod: daemon-set-wnpv8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 30 22:54:20.098: INFO: Pod daemon-set-wnpv8 is not available
Jan 30 22:54:21.179: INFO: Wrong image for pod: daemon-set-wnpv8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 30 22:54:21.180: INFO: Pod daemon-set-wnpv8 is not available
Jan 30 22:54:22.123: INFO: Pod daemon-set-9hcmt is not available
Jan 30 22:54:23.203: INFO: Pod daemon-set-9hcmt is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3006, will wait for the garbage collector to delete the pods
Jan 30 22:54:23.688: INFO: Deleting DaemonSet.extensions daemon-set took: 6.717059ms
Jan 30 22:54:24.488: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.447602ms
Jan 30 22:54:32.392: INFO: Number of nodes with available pods: 0
Jan 30 22:54:32.392: INFO: Number of running nodes: 0, number of available pods: 0
Jan 30 22:54:32.395: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3006/daemonsets","resourceVersion":"5392088"},"items":null}

Jan 30 22:54:32.397: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3006/pods","resourceVersion":"5392088"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:54:32.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3006" for this suite.

• [SLOW TEST:29.566 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":267,"skipped":4410,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:54:32.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-5ba6b369-b27b-4f32-ba71-6b4c6338b949
STEP: Creating a pod to test consume secrets
Jan 30 22:54:32.593: INFO: Waiting up to 5m0s for pod "pod-secrets-ba7c42f1-c946-4eb8-a779-6cca733bd532" in namespace "secrets-587" to be "success or failure"
Jan 30 22:54:32.613: INFO: Pod "pod-secrets-ba7c42f1-c946-4eb8-a779-6cca733bd532": Phase="Pending", Reason="", readiness=false. Elapsed: 19.092593ms
Jan 30 22:54:34.618: INFO: Pod "pod-secrets-ba7c42f1-c946-4eb8-a779-6cca733bd532": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024626613s
Jan 30 22:54:36.625: INFO: Pod "pod-secrets-ba7c42f1-c946-4eb8-a779-6cca733bd532": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031254567s
Jan 30 22:54:38.635: INFO: Pod "pod-secrets-ba7c42f1-c946-4eb8-a779-6cca733bd532": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041151064s
Jan 30 22:54:40.641: INFO: Pod "pod-secrets-ba7c42f1-c946-4eb8-a779-6cca733bd532": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047630295s
STEP: Saw pod success
Jan 30 22:54:40.641: INFO: Pod "pod-secrets-ba7c42f1-c946-4eb8-a779-6cca733bd532" satisfied condition "success or failure"
Jan 30 22:54:40.644: INFO: Trying to get logs from node jerma-node pod pod-secrets-ba7c42f1-c946-4eb8-a779-6cca733bd532 container secret-volume-test: 
STEP: delete the pod
Jan 30 22:54:40.691: INFO: Waiting for pod pod-secrets-ba7c42f1-c946-4eb8-a779-6cca733bd532 to disappear
Jan 30 22:54:40.695: INFO: Pod pod-secrets-ba7c42f1-c946-4eb8-a779-6cca733bd532 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:54:40.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-587" for this suite.

• [SLOW TEST:8.292 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4410,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:54:40.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-9799
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 30 22:54:40.810: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 30 22:55:12.998: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-9799 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 22:55:12.998: INFO: >>> kubeConfig: /root/.kube/config
I0130 22:55:13.071716       8 log.go:172] (0xc001c4a160) (0xc0016ee320) Create stream
I0130 22:55:13.071865       8 log.go:172] (0xc001c4a160) (0xc0016ee320) Stream added, broadcasting: 1
I0130 22:55:13.075556       8 log.go:172] (0xc001c4a160) Reply frame received for 1
I0130 22:55:13.075671       8 log.go:172] (0xc001c4a160) (0xc00240c3c0) Create stream
I0130 22:55:13.075682       8 log.go:172] (0xc001c4a160) (0xc00240c3c0) Stream added, broadcasting: 3
I0130 22:55:13.077605       8 log.go:172] (0xc001c4a160) Reply frame received for 3
I0130 22:55:13.077648       8 log.go:172] (0xc001c4a160) (0xc00244fb80) Create stream
I0130 22:55:13.077660       8 log.go:172] (0xc001c4a160) (0xc00244fb80) Stream added, broadcasting: 5
I0130 22:55:13.079620       8 log.go:172] (0xc001c4a160) Reply frame received for 5
I0130 22:55:13.159424       8 log.go:172] (0xc001c4a160) Data frame received for 3
I0130 22:55:13.159517       8 log.go:172] (0xc00240c3c0) (3) Data frame handling
I0130 22:55:13.159555       8 log.go:172] (0xc00240c3c0) (3) Data frame sent
I0130 22:55:13.218471       8 log.go:172] (0xc001c4a160) Data frame received for 1
I0130 22:55:13.218541       8 log.go:172] (0xc0016ee320) (1) Data frame handling
I0130 22:55:13.218601       8 log.go:172] (0xc0016ee320) (1) Data frame sent
I0130 22:55:13.218630       8 log.go:172] (0xc001c4a160) (0xc0016ee320) Stream removed, broadcasting: 1
I0130 22:55:13.218679       8 log.go:172] (0xc001c4a160) (0xc00240c3c0) Stream removed, broadcasting: 3
I0130 22:55:13.219539       8 log.go:172] (0xc001c4a160) (0xc00244fb80) Stream removed, broadcasting: 5
I0130 22:55:13.219584       8 log.go:172] (0xc001c4a160) Go away received
I0130 22:55:13.219646       8 log.go:172] (0xc001c4a160) (0xc0016ee320) Stream removed, broadcasting: 1
I0130 22:55:13.219670       8 log.go:172] (0xc001c4a160) (0xc00240c3c0) Stream removed, broadcasting: 3
I0130 22:55:13.219685       8 log.go:172] (0xc001c4a160) (0xc00244fb80) Stream removed, broadcasting: 5
Jan 30 22:55:13.219: INFO: Waiting for responses: map[]
Jan 30 22:55:13.226: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-9799 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 30 22:55:13.226: INFO: >>> kubeConfig: /root/.kube/config
I0130 22:55:13.270723       8 log.go:172] (0xc002088630) (0xc001cd4140) Create stream
I0130 22:55:13.270845       8 log.go:172] (0xc002088630) (0xc001cd4140) Stream added, broadcasting: 1
I0130 22:55:13.274008       8 log.go:172] (0xc002088630) Reply frame received for 1
I0130 22:55:13.274084       8 log.go:172] (0xc002088630) (0xc00240c640) Create stream
I0130 22:55:13.274100       8 log.go:172] (0xc002088630) (0xc00240c640) Stream added, broadcasting: 3
I0130 22:55:13.275729       8 log.go:172] (0xc002088630) Reply frame received for 3
I0130 22:55:13.275763       8 log.go:172] (0xc002088630) (0xc0016ee3c0) Create stream
I0130 22:55:13.275773       8 log.go:172] (0xc002088630) (0xc0016ee3c0) Stream added, broadcasting: 5
I0130 22:55:13.277014       8 log.go:172] (0xc002088630) Reply frame received for 5
I0130 22:55:13.351324       8 log.go:172] (0xc002088630) Data frame received for 3
I0130 22:55:13.351388       8 log.go:172] (0xc00240c640) (3) Data frame handling
I0130 22:55:13.351420       8 log.go:172] (0xc00240c640) (3) Data frame sent
I0130 22:55:13.490855       8 log.go:172] (0xc002088630) Data frame received for 1
I0130 22:55:13.491100       8 log.go:172] (0xc002088630) (0xc00240c640) Stream removed, broadcasting: 3
I0130 22:55:13.491342       8 log.go:172] (0xc001cd4140) (1) Data frame handling
I0130 22:55:13.491430       8 log.go:172] (0xc001cd4140) (1) Data frame sent
I0130 22:55:13.491632       8 log.go:172] (0xc002088630) (0xc001cd4140) Stream removed, broadcasting: 1
I0130 22:55:13.491718       8 log.go:172] (0xc002088630) (0xc0016ee3c0) Stream removed, broadcasting: 5
I0130 22:55:13.491786       8 log.go:172] (0xc002088630) Go away received
I0130 22:55:13.492481       8 log.go:172] (0xc002088630) (0xc001cd4140) Stream removed, broadcasting: 1
I0130 22:55:13.492509       8 log.go:172] (0xc002088630) (0xc00240c640) Stream removed, broadcasting: 3
I0130 22:55:13.492667       8 log.go:172] (0xc002088630) (0xc0016ee3c0) Stream removed, broadcasting: 5
Jan 30 22:55:13.492: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:55:13.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9799" for this suite.

• [SLOW TEST:32.820 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4418,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:55:13.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:55:13.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 30 22:55:16.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1163 create -f -'
Jan 30 22:55:20.094: INFO: stderr: ""
Jan 30 22:55:20.094: INFO: stdout: "e2e-test-crd-publish-openapi-7383-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 30 22:55:20.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1163 delete e2e-test-crd-publish-openapi-7383-crds test-foo'
Jan 30 22:55:20.303: INFO: stderr: ""
Jan 30 22:55:20.303: INFO: stdout: "e2e-test-crd-publish-openapi-7383-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 30 22:55:20.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1163 apply -f -'
Jan 30 22:55:22.259: INFO: stderr: ""
Jan 30 22:55:22.259: INFO: stdout: "e2e-test-crd-publish-openapi-7383-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 30 22:55:22.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1163 delete e2e-test-crd-publish-openapi-7383-crds test-foo'
Jan 30 22:55:22.622: INFO: stderr: ""
Jan 30 22:55:22.622: INFO: stdout: "e2e-test-crd-publish-openapi-7383-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 30 22:55:22.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1163 create -f -'
Jan 30 22:55:23.070: INFO: rc: 1
Jan 30 22:55:23.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1163 apply -f -'
Jan 30 22:55:23.505: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan 30 22:55:23.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1163 create -f -'
Jan 30 22:55:24.074: INFO: rc: 1
Jan 30 22:55:24.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1163 apply -f -'
Jan 30 22:55:24.400: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 30 22:55:24.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7383-crds'
Jan 30 22:55:24.813: INFO: stderr: ""
Jan 30 22:55:24.813: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7383-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 30 22:55:24.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7383-crds.metadata'
Jan 30 22:55:25.135: INFO: stderr: ""
Jan 30 22:55:25.135: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7383-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jan 30 22:55:25.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7383-crds.spec'
Jan 30 22:55:25.502: INFO: stderr: ""
Jan 30 22:55:25.502: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7383-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan 30 22:55:25.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7383-crds.spec.bars'
Jan 30 22:55:25.897: INFO: stderr: ""
Jan 30 22:55:25.897: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7383-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 30 22:55:25.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7383-crds.spec.bars2'
Jan 30 22:55:26.260: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:55:29.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1163" for this suite.

• [SLOW TEST:15.722 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":270,"skipped":4433,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:55:29.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:55:29.408: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 71.753904ms)
Jan 30 22:55:29.417: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 8.380964ms)
Jan 30 22:55:29.424: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.599133ms)
Jan 30 22:55:29.431: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.789143ms)
Jan 30 22:55:29.437: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.71227ms)
Jan 30 22:55:29.442: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.952972ms)
Jan 30 22:55:29.447: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.454314ms)
Jan 30 22:55:29.453: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.754558ms)
Jan 30 22:55:29.461: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.85731ms)
Jan 30 22:55:29.470: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 8.464041ms)
Jan 30 22:55:29.499: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 29.281562ms)
Jan 30 22:55:29.506: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.685047ms)
Jan 30 22:55:29.513: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.777073ms)
Jan 30 22:55:29.520: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.264121ms)
Jan 30 22:55:29.527: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.684286ms)
Jan 30 22:55:29.534: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.575773ms)
Jan 30 22:55:29.541: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.218847ms)
Jan 30 22:55:29.547: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.39908ms)
Jan 30 22:55:29.556: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 8.008912ms)
Jan 30 22:55:29.561: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.272639ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:55:29.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3679" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":271,"skipped":4434,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:55:29.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1110
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-1110
I0130 22:55:29.904870       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1110, replica count: 2
I0130 22:55:32.955797       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 22:55:35.956122       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0130 22:55:38.956955       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 30 22:55:38.957: INFO: Creating new exec pod
Jan 30 22:55:46.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1110 execpodh2z5v -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 30 22:55:46.507: INFO: stderr: "I0130 22:55:46.285210    4558 log.go:172] (0xc000451080) (0xc000699cc0) Create stream\nI0130 22:55:46.285431    4558 log.go:172] (0xc000451080) (0xc000699cc0) Stream added, broadcasting: 1\nI0130 22:55:46.289415    4558 log.go:172] (0xc000451080) Reply frame received for 1\nI0130 22:55:46.289562    4558 log.go:172] (0xc000451080) (0xc0009aa000) Create stream\nI0130 22:55:46.289575    4558 log.go:172] (0xc000451080) (0xc0009aa000) Stream added, broadcasting: 3\nI0130 22:55:46.292300    4558 log.go:172] (0xc000451080) Reply frame received for 3\nI0130 22:55:46.292344    4558 log.go:172] (0xc000451080) (0xc000699d60) Create stream\nI0130 22:55:46.292353    4558 log.go:172] (0xc000451080) (0xc000699d60) Stream added, broadcasting: 5\nI0130 22:55:46.294126    4558 log.go:172] (0xc000451080) Reply frame received for 5\nI0130 22:55:46.381341    4558 log.go:172] (0xc000451080) Data frame received for 5\nI0130 22:55:46.381424    4558 log.go:172] (0xc000699d60) (5) Data frame handling\nI0130 22:55:46.381451    4558 log.go:172] (0xc000699d60) (5) Data frame sent\nI0130 22:55:46.381464    4558 log.go:172] (0xc000451080) Data frame received for 5\nI0130 22:55:46.381473    4558 log.go:172] (0xc000699d60) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-serviceI0130 22:55:46.381505    4558 log.go:172] (0xc000699d60) (5) Data frame sent\nI0130 22:55:46.381511    4558 log.go:172] (0xc000451080) Data frame received for 5\nI0130 22:55:46.381517    4558 log.go:172] (0xc000699d60) (5) Data frame handling\nI0130 22:55:46.381529    4558 log.go:172] (0xc000699d60) (5) Data frame sent\n 80\nI0130 22:55:46.387594    4558 log.go:172] (0xc000451080) Data frame received for 5\nI0130 22:55:46.387611    4558 log.go:172] (0xc000699d60) (5) Data frame handling\nI0130 22:55:46.387625    4558 log.go:172] (0xc000699d60) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0130 22:55:46.487917    4558 log.go:172] (0xc000451080) Data frame received for 1\nI0130 22:55:46.488123    4558 log.go:172] (0xc000451080) (0xc000699d60) Stream removed, broadcasting: 5\nI0130 22:55:46.488204    4558 log.go:172] (0xc000699cc0) (1) Data frame handling\nI0130 22:55:46.488238    4558 log.go:172] (0xc000699cc0) (1) Data frame sent\nI0130 22:55:46.488284    4558 log.go:172] (0xc000451080) (0xc0009aa000) Stream removed, broadcasting: 3\nI0130 22:55:46.488353    4558 log.go:172] (0xc000451080) (0xc000699cc0) Stream removed, broadcasting: 1\nI0130 22:55:46.488371    4558 log.go:172] (0xc000451080) Go away received\nI0130 22:55:46.490826    4558 log.go:172] (0xc000451080) (0xc000699cc0) Stream removed, broadcasting: 1\nI0130 22:55:46.491029    4558 log.go:172] (0xc000451080) (0xc0009aa000) Stream removed, broadcasting: 3\nI0130 22:55:46.491053    4558 log.go:172] (0xc000451080) (0xc000699d60) Stream removed, broadcasting: 5\n"
Jan 30 22:55:46.508: INFO: stdout: ""
Jan 30 22:55:46.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1110 execpodh2z5v -- /bin/sh -x -c nc -zv -t -w 2 10.96.98.229 80'
Jan 30 22:55:46.879: INFO: stderr: "I0130 22:55:46.672400    4580 log.go:172] (0xc0008382c0) (0xc0009466e0) Create stream\nI0130 22:55:46.672668    4580 log.go:172] (0xc0008382c0) (0xc0009466e0) Stream added, broadcasting: 1\nI0130 22:55:46.688252    4580 log.go:172] (0xc0008382c0) Reply frame received for 1\nI0130 22:55:46.688310    4580 log.go:172] (0xc0008382c0) (0xc00062a6e0) Create stream\nI0130 22:55:46.688322    4580 log.go:172] (0xc0008382c0) (0xc00062a6e0) Stream added, broadcasting: 3\nI0130 22:55:46.689457    4580 log.go:172] (0xc0008382c0) Reply frame received for 3\nI0130 22:55:46.689477    4580 log.go:172] (0xc0008382c0) (0xc0004514a0) Create stream\nI0130 22:55:46.689484    4580 log.go:172] (0xc0008382c0) (0xc0004514a0) Stream added, broadcasting: 5\nI0130 22:55:46.690886    4580 log.go:172] (0xc0008382c0) Reply frame received for 5\nI0130 22:55:46.766737    4580 log.go:172] (0xc0008382c0) Data frame received for 5\nI0130 22:55:46.766910    4580 log.go:172] (0xc0004514a0) (5) Data frame handling\nI0130 22:55:46.766966    4580 log.go:172] (0xc0004514a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.98.229 80\nI0130 22:55:46.768593    4580 log.go:172] (0xc0008382c0) Data frame received for 5\nI0130 22:55:46.768609    4580 log.go:172] (0xc0004514a0) (5) Data frame handling\nI0130 22:55:46.768629    4580 log.go:172] (0xc0004514a0) (5) Data frame sent\nConnection to 10.96.98.229 80 port [tcp/http] succeeded!\nI0130 22:55:46.856189    4580 log.go:172] (0xc0008382c0) (0xc00062a6e0) Stream removed, broadcasting: 3\nI0130 22:55:46.856451    4580 log.go:172] (0xc0008382c0) Data frame received for 1\nI0130 22:55:46.856474    4580 log.go:172] (0xc0009466e0) (1) Data frame handling\nI0130 22:55:46.856499    4580 log.go:172] (0xc0009466e0) (1) Data frame sent\nI0130 22:55:46.856752    4580 log.go:172] (0xc0008382c0) (0xc0009466e0) Stream removed, broadcasting: 1\nI0130 22:55:46.857989    4580 log.go:172] (0xc0008382c0) (0xc0004514a0) Stream removed, broadcasting: 5\nI0130 22:55:46.858027    4580 log.go:172] (0xc0008382c0) Go away received\nI0130 22:55:46.858345    4580 log.go:172] (0xc0008382c0) (0xc0009466e0) Stream removed, broadcasting: 1\nI0130 22:55:46.858373    4580 log.go:172] (0xc0008382c0) (0xc00062a6e0) Stream removed, broadcasting: 3\nI0130 22:55:46.858391    4580 log.go:172] (0xc0008382c0) (0xc0004514a0) Stream removed, broadcasting: 5\n"
Jan 30 22:55:46.879: INFO: stdout: ""
Jan 30 22:55:46.879: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:55:46.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1110" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.390 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":272,"skipped":4441,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:55:46.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2469.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2469.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2469.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2469.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2469.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2469.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 30 22:56:03.166: INFO: DNS probes using dns-2469/dns-test-933453f3-c9d8-4053-b46b-587ea735d455 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:56:03.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2469" for this suite.

• [SLOW TEST:16.313 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":273,"skipped":4471,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:56:03.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 30 22:56:03.392: INFO: Waiting up to 5m0s for pod "pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21" in namespace "emptydir-6317" to be "success or failure"
Jan 30 22:56:03.397: INFO: Pod "pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21": Phase="Pending", Reason="", readiness=false. Elapsed: 5.060739ms
Jan 30 22:56:05.413: INFO: Pod "pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021022343s
Jan 30 22:56:07.419: INFO: Pod "pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026687963s
Jan 30 22:56:09.425: INFO: Pod "pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033206694s
Jan 30 22:56:11.432: INFO: Pod "pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039436409s
Jan 30 22:56:13.446: INFO: Pod "pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054011102s
STEP: Saw pod success
Jan 30 22:56:13.446: INFO: Pod "pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21" satisfied condition "success or failure"
Jan 30 22:56:13.451: INFO: Trying to get logs from node jerma-node pod pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21 container test-container: 
STEP: delete the pod
Jan 30 22:56:13.539: INFO: Waiting for pod pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21 to disappear
Jan 30 22:56:13.545: INFO: Pod pod-78bc0ff0-6e23-448d-8406-f206b9a6ce21 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:56:13.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6317" for this suite.

• [SLOW TEST:10.282 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4506,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:56:13.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jan 30 22:56:13.827: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:56:14.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2857" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":275,"skipped":4518,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:56:14.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:56:23.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3206" for this suite.

• [SLOW TEST:8.290 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4519,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jan 30 22:56:23.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 30 22:56:23.372: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2476 /api/v1/namespaces/watch-2476/configmaps/e2e-watch-test-label-changed c606029e-4b72-4ba9-a3d7-039269ae9bc3 5392658 0 2020-01-30 22:56:23 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 30 22:56:23.372: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2476 /api/v1/namespaces/watch-2476/configmaps/e2e-watch-test-label-changed c606029e-4b72-4ba9-a3d7-039269ae9bc3 5392659 0 2020-01-30 22:56:23 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 30 22:56:23.372: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2476 /api/v1/namespaces/watch-2476/configmaps/e2e-watch-test-label-changed c606029e-4b72-4ba9-a3d7-039269ae9bc3 5392660 0 2020-01-30 22:56:23 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 30 22:56:33.428: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2476 /api/v1/namespaces/watch-2476/configmaps/e2e-watch-test-label-changed c606029e-4b72-4ba9-a3d7-039269ae9bc3 5392696 0 2020-01-30 22:56:23 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 30 22:56:33.429: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2476 /api/v1/namespaces/watch-2476/configmaps/e2e-watch-test-label-changed c606029e-4b72-4ba9-a3d7-039269ae9bc3 5392697 0 2020-01-30 22:56:23 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 30 22:56:33.429: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2476 /api/v1/namespaces/watch-2476/configmaps/e2e-watch-test-label-changed c606029e-4b72-4ba9-a3d7-039269ae9bc3 5392698 0 2020-01-30 22:56:23 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jan 30 22:56:33.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2476" for this suite.

• [SLOW TEST:10.355 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":277,"skipped":4524,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
Jan 30 22:56:33.446: INFO: Running AfterSuite actions on all nodes
Jan 30 22:56:33.446: INFO: Running AfterSuite actions on node 1
Jan 30 22:56:33.446: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4536,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2315

Ran 278 of 4814 Specs in 6454.749 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4536 Skipped
--- FAIL: TestE2E (6454.85s)
FAIL