I0720 20:50:09.412920 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0720 20:50:09.413179 6 e2e.go:109] Starting e2e run "028a60c4-aca1-4b1c-a4d1-f6b0cd25560b" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1595278208 - Will randomize all specs
Will run 278 of 4843 specs

Jul 20 20:50:09.485: INFO: >>> kubeConfig: /root/.kube/config
Jul 20 20:50:09.490: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 20 20:50:09.547: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 20 20:50:09.587: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 20 20:50:09.587: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 20 20:50:09.587: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 20 20:50:09.594: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 20 20:50:09.594: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 20 20:50:09.594: INFO: e2e test version: v1.17.8
Jul 20 20:50:09.595: INFO: kube-apiserver version: v1.17.5
Jul 20 20:50:09.595: INFO: >>> kubeConfig: /root/.kube/config
Jul 20 20:50:09.600: INFO: Cluster IP family: ipv4
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:50:09.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
Jul 20 20:50:09.689: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:50:13.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6233" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":0,"failed":0}
SSSSS
------------------------------
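For reference, a pod of the kind this spec creates can be sketched as follows; the names are illustrative, not taken from the run:

# A command that always fails, so the kubelet records a terminated state with a reason.
apiVersion: v1
kind: Pod
metadata:
  name: bin-false            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
# After it runs, the terminated reason (typically "Error") can be read with:
#   kubectl get pod bin-false -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'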
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:50:13.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-fe4faa48-b99a-499a-8413-c96ec0a74874
STEP: Creating a pod to test consume configMaps
Jul 20 20:50:13.970: INFO: Waiting up to 5m0s for pod "pod-configmaps-814a5cdf-142d-4b2c-85db-79b34ab6871e" in namespace "configmap-1974" to be "success or failure"
Jul 20 20:50:13.974: INFO: Pod "pod-configmaps-814a5cdf-142d-4b2c-85db-79b34ab6871e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.941663ms
Jul 20 20:50:15.978: INFO: Pod "pod-configmaps-814a5cdf-142d-4b2c-85db-79b34ab6871e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008005545s
Jul 20 20:50:17.982: INFO: Pod "pod-configmaps-814a5cdf-142d-4b2c-85db-79b34ab6871e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012296237s
STEP: Saw pod success
Jul 20 20:50:17.982: INFO: Pod "pod-configmaps-814a5cdf-142d-4b2c-85db-79b34ab6871e" satisfied condition "success or failure"
Jul 20 20:50:17.985: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-814a5cdf-142d-4b2c-85db-79b34ab6871e container configmap-volume-test:
STEP: delete the pod
Jul 20 20:50:18.023: INFO: Waiting for pod pod-configmaps-814a5cdf-142d-4b2c-85db-79b34ab6871e to disappear
Jul 20 20:50:18.027: INFO: Pod pod-configmaps-814a5cdf-142d-4b2c-85db-79b34ab6871e no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:50:18.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1974" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":5,"failed":0}
SSSSSSS
------------------------------
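The defaultMode behaviour verified above can be reproduced with a manifest along these lines; the names are illustrative and the ConfigMap is assumed to already exist:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-defaultmode   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/config"]   # file modes should show up as -r--------
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: my-configmap            # assumed to exist in the namespace
      defaultMode: 0400             # mode applied to every projected file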
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:50:18.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 20 20:50:18.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba10fc0e-3aa0-4485-bfe3-0afdcb32e8ea" in namespace "downward-api-8734" to be "success or failure"
Jul 20 20:50:18.119: INFO: Pod "downwardapi-volume-ba10fc0e-3aa0-4485-bfe3-0afdcb32e8ea": Phase="Pending", Reason="", readiness=false. Elapsed: 17.597107ms
Jul 20 20:50:20.123: INFO: Pod "downwardapi-volume-ba10fc0e-3aa0-4485-bfe3-0afdcb32e8ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021749065s
Jul 20 20:50:22.127: INFO: Pod "downwardapi-volume-ba10fc0e-3aa0-4485-bfe3-0afdcb32e8ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025752812s
STEP: Saw pod success
Jul 20 20:50:22.127: INFO: Pod "downwardapi-volume-ba10fc0e-3aa0-4485-bfe3-0afdcb32e8ea" satisfied condition "success or failure"
Jul 20 20:50:22.129: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ba10fc0e-3aa0-4485-bfe3-0afdcb32e8ea container client-container:
STEP: delete the pod
Jul 20 20:50:22.181: INFO: Waiting for pod downwardapi-volume-ba10fc0e-3aa0-4485-bfe3-0afdcb32e8ea to disappear
Jul 20 20:50:22.208: INFO: Pod downwardapi-volume-ba10fc0e-3aa0-4485-bfe3-0afdcb32e8ea no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:50:22.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8734" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":12,"failed":0}
SSSSSSSSSS
------------------------------
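A downward API volume exposing only the pod name, as this spec checks, looks roughly like the following (hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]   # prints the pod's own name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name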
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:50:22.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul 20 20:50:22.313: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4822 /api/v1/namespaces/watch-4822/configmaps/e2e-watch-test-watch-closed bd5aef9d-9068-4704-8476-7d3809eaeb03 2856607 0 2020-07-20 20:50:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 20 20:50:22.314: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4822 /api/v1/namespaces/watch-4822/configmaps/e2e-watch-test-watch-closed bd5aef9d-9068-4704-8476-7d3809eaeb03 2856608 0 2020-07-20 20:50:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul 20 20:50:22.365: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4822 /api/v1/namespaces/watch-4822/configmaps/e2e-watch-test-watch-closed bd5aef9d-9068-4704-8476-7d3809eaeb03 2856609 0 2020-07-20 20:50:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 20 20:50:22.365: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4822 /api/v1/namespaces/watch-4822/configmaps/e2e-watch-test-watch-closed bd5aef9d-9068-4704-8476-7d3809eaeb03 2856610 0 2020-07-20 20:50:22 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:50:22.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4822" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":4,"skipped":22,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:50:22.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-dd65499d-9085-4ac7-9a8a-52fa686a82c8
STEP: Creating a pod to test consume secrets
Jul 20 20:50:22.485: INFO: Waiting up to 5m0s for pod "pod-secrets-38f1abb5-999d-44a7-977d-085a293cd712" in namespace "secrets-7259" to be "success or failure"
Jul 20 20:50:22.503: INFO: Pod "pod-secrets-38f1abb5-999d-44a7-977d-085a293cd712": Phase="Pending", Reason="", readiness=false. Elapsed: 18.137493ms
Jul 20 20:50:24.507: INFO: Pod "pod-secrets-38f1abb5-999d-44a7-977d-085a293cd712": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022389023s
Jul 20 20:50:26.511: INFO: Pod "pod-secrets-38f1abb5-999d-44a7-977d-085a293cd712": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026045716s
STEP: Saw pod success
Jul 20 20:50:26.511: INFO: Pod "pod-secrets-38f1abb5-999d-44a7-977d-085a293cd712" satisfied condition "success or failure"
Jul 20 20:50:26.513: INFO: Trying to get logs from node jerma-worker pod pod-secrets-38f1abb5-999d-44a7-977d-085a293cd712 container secret-volume-test:
STEP: delete the pod
Jul 20 20:50:26.660: INFO: Waiting for pod pod-secrets-38f1abb5-999d-44a7-977d-085a293cd712 to disappear
Jul 20 20:50:26.790: INFO: Pod pod-secrets-38f1abb5-999d-44a7-977d-085a293cd712 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:50:26.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7259" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":27,"failed":0}
S
------------------------------
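The mapped-key-plus-mode layout under test corresponds to a secret volume spec like the following sketch; the secret name and key are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-mapped          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret        # assumed to exist
      items:
      - key: data-1                # key remapped to a different file name
        path: new-path-data-1
        mode: 0400                 # per-item file mode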
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:50:26.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 20:50:27.555: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 20:50:29.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875027, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875027, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875027, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875027, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 20:50:31.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875027, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875027, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875027, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875027, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 20:50:34.635: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:50:34.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5134" for this suite.
STEP: Destroying namespace "webhook-5134-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.144 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":6,"skipped":28,"failed":0}
SSSSSSSS
------------------------------
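The object being patched in this spec is a ValidatingWebhookConfiguration; in outline it looks like the sketch below, with illustrative names and the caBundle omitted:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-configmaps              # hypothetical name
webhooks:
- name: deny-unwanted-configmap-data.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]           # the test removes and re-adds CREATE via update and patch
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: default             # the e2e run points at its own webhook service instead
      name: e2e-test-webhook
      path: /configmaps
    # caBundle: <base64 CA> would be required in practice
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail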
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:50:34.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 20:50:35.909: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 20:50:37.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875035, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875035, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875036, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875035, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 20:50:39.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875035, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875035, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875036, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875035, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 20:50:42.973: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:50:43.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9644" for this suite.
STEP: Destroying namespace "webhook-9644-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.284 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate pod and apply defaults after mutation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":7,"skipped":36,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
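The mutating counterpart registered against pod CREATE in this spec is sketched below, again with illustrative names and the caBundle omitted:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-pods                  # hypothetical name
webhooks:
- name: add-pod-defaults.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: default             # the e2e run uses its own webhook service
      name: e2e-test-webhook
      path: /mutating-pods
    # caBundle: <base64 CA> would be required in practice
  admissionReviewVersions: ["v1"]
  sideEffects: None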
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:50:43.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul 20 20:50:44.451: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul 20 20:50:46.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875044, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875044, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875044, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875044, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 20:50:48.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875044, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875044, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875044, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875044, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 20:50:51.654: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 20:50:51.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:50:52.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6713" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:9.745 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":8,"skipped":64,"failed":0}
S
------------------------------
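Conversion webhooks like the one exercised here are wired up through the CRD's spec.conversion stanza; a sketch with a hypothetical group and kind:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.example.com   # hypothetical group and kind
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true                     # v1 is the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook                 # objects are converted between v1 and v2 by the webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: default          # the e2e run uses its own conversion webhook service
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert
        # caBundle: <base64 CA> would be required in practice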
[k8s.io] Lease lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:50:52.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:50:53.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6406" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":9,"skipped":65,"failed":0}
------------------------------
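The coordination.k8s.io Lease object whose API this spec probes is minimal; an illustrative example:

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease          # hypothetical name
spec:
  holderIdentity: holder-1     # identity of the current lease holder
  leaseDurationSeconds: 30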
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:50:53.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Jul 20 20:50:58.416: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2228 pod-service-account-919fb462-5a37-45d5-84ce-15455ea586d9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jul 20 20:51:01.350: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2228 pod-service-account-919fb462-5a37-45d5-84ce-15455ea586d9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jul 20 20:51:01.562: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2228 pod-service-account-919fb462-5a37-45d5-84ce-15455ea586d9 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:51:01.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2228" for this suite.
• [SLOW TEST:7.995 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":10,"skipped":65,"failed":0}
SSSSSSSS
------------------------------
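The token, CA bundle, and namespace files read by the kubectl exec commands above are mounted automatically into any pod that uses a service account; a minimal sketch (hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-test   # hypothetical name
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
# The mounted files can then be read in place, e.g.:
#   kubectl exec pod-service-account-test -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token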
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:51:01.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 20:51:01.841: INFO: Waiting up to 5m0s for pod "busybox-user-65534-76e0ae02-74d0-47e8-9f49-c75fac4fdc45" in namespace "security-context-test-2631" to be "success or failure"
Jul 20 20:51:01.850: INFO: Pod "busybox-user-65534-76e0ae02-74d0-47e8-9f49-c75fac4fdc45": Phase="Pending", Reason="", readiness=false. Elapsed: 9.05309ms
Jul 20 20:51:03.854: INFO: Pod "busybox-user-65534-76e0ae02-74d0-47e8-9f49-c75fac4fdc45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012829179s
Jul 20 20:51:05.858: INFO: Pod "busybox-user-65534-76e0ae02-74d0-47e8-9f49-c75fac4fdc45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017151922s
Jul 20 20:51:05.858: INFO: Pod "busybox-user-65534-76e0ae02-74d0-47e8-9f49-c75fac4fdc45" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:51:05.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2631" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":73,"failed":0}
SSSS
------------------------------
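The securityContext arrangement under test is, in outline (hypothetical pod name):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "id -u"]   # should print 65534
    securityContext:
      runAsUser: 65534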
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:51:05.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 20 20:51:05.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f93e58de-d97c-465e-9ce5-b78a45cc685a" in namespace "projected-5336" to be "success or failure"
Jul 20 20:51:06.001: INFO: Pod "downwardapi-volume-f93e58de-d97c-465e-9ce5-b78a45cc685a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.031593ms
Jul 20 20:51:08.133: INFO: Pod "downwardapi-volume-f93e58de-d97c-465e-9ce5-b78a45cc685a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153449391s
Jul 20 20:51:10.137: INFO: Pod "downwardapi-volume-f93e58de-d97c-465e-9ce5-b78a45cc685a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157591408s
STEP: Saw pod success
Jul 20 20:51:10.138: INFO: Pod "downwardapi-volume-f93e58de-d97c-465e-9ce5-b78a45cc685a" satisfied condition "success or failure"
Jul 20 20:51:10.140: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f93e58de-d97c-465e-9ce5-b78a45cc685a container client-container:
STEP: delete the pod
Jul 20 20:51:10.296: INFO: Waiting for pod downwardapi-volume-f93e58de-d97c-465e-9ce5-b78a45cc685a to disappear
Jul 20 20:51:10.306: INFO: Pod downwardapi-volume-f93e58de-d97c-465e-9ce5-b78a45cc685a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:51:10.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5336" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":77,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
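The projected downward API volume this spec relies on uses a resourceFieldRef; a sketch with no memory limit set, so the node allocatable value is what gets reported (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # no limit set, so node allocatable is reported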
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:51:10.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 20 20:51:10.368: INFO: Waiting up to 5m0s for pod "pod-081128af-6451-48f5-b844-659c32b61d0a" in namespace "emptydir-9546" to be "success or failure"
Jul 20 20:51:10.372: INFO: Pod "pod-081128af-6451-48f5-b844-659c32b61d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.992041ms
Jul 20 20:51:12.383: INFO: Pod "pod-081128af-6451-48f5-b844-659c32b61d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015244646s
Jul 20 20:51:14.387: INFO: Pod "pod-081128af-6451-48f5-b844-659c32b61d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019045901s
Jul 20 20:51:16.433: INFO: Pod "pod-081128af-6451-48f5-b844-659c32b61d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064771343s
Jul 20 20:51:18.436: INFO: Pod "pod-081128af-6451-48f5-b844-659c32b61d0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068459343s
STEP: Saw pod success
Jul 20 20:51:18.437: INFO: Pod "pod-081128af-6451-48f5-b844-659c32b61d0a" satisfied condition "success or failure"
Jul 20 20:51:18.439: INFO: Trying to get logs from node jerma-worker pod pod-081128af-6451-48f5-b844-659c32b61d0a container test-container:
STEP: delete the pod
Jul 20 20:51:18.457: INFO: Waiting for pod pod-081128af-6451-48f5-b844-659c32b61d0a to disappear
Jul 20 20:51:18.476: INFO: Pod pod-081128af-6451-48f5-b844-659c32b61d0a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:51:18.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9546" for this suite.
• [SLOW TEST:8.170 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":128,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
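A tmpfs-backed emptyDir exercised as a non-root user can be sketched like this (illustrative names and UID):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs            # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # tmpfs-backed emptyDir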
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:51:18.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jul 20 20:51:19.325: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jul 20 20:51:21.335: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875079, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875079, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875079, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730875079, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 20:51:24.418: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 20:51:24.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:51:25.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8946" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:7.218 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":14,"skipped":180,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:51:25.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul 20 20:51:25.798: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 20 20:51:25.822: INFO: Waiting for terminating namespaces to be deleted...
Jul 20 20:51:25.825: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Jul 20 20:51:25.830: INFO: rally-3956bea4-y4vzo778-khwkh from c-rally-3956bea4-ddnvn1he started at 2020-07-20 20:51:10 +0000 UTC (1 container statuses recorded)
Jul 20 20:51:25.831: INFO: Container rally-3956bea4-y4vzo778 ready: true, restart count 0
Jul 20 20:51:25.831: INFO: kube-proxy-2ssxj from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded)
Jul 20 20:51:25.831: INFO: Container kube-proxy ready: true, restart count 0
Jul 20 20:51:25.831: INFO: kindnet-bqk7h from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded)
Jul 20 20:51:25.831: INFO: Container kindnet-cni ready: true, restart count 0
Jul 20 20:51:25.831: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Jul 20 20:51:25.853: INFO: kube-proxy-67jwf from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded)
Jul 20 20:51:25.853: INFO: Container kube-proxy ready: true, restart count 0
Jul 20 20:51:25.853: INFO: rally-3956bea4-y4vzo778-24c9d from c-rally-3956bea4-ddnvn1he started at 2020-07-20 20:51:10 +0000 UTC (1 container statuses recorded)
Jul 20 20:51:25.853: INFO: Container rally-3956bea4-y4vzo778 ready: true, restart count 0
Jul 20 20:51:25.853: INFO: kindnet-klj8h from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded)
Jul 20 20:51:25.853: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-082957d4-bbe0-4e0e-95a2-8c6086141b11 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-082957d4-bbe0-4e0e-95a2-8c6086141b11 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-082957d4-bbe0-4e0e-95a2-8c6086141b11
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:56:33.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3093" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:308.308 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":15,"skipped":217,"failed":0}
SSSSSSSSS
------------------------------
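The two conflicting pods correspond to specs along these lines; everything here is illustrative, and busybox stands in for the test image since the conflict is purely a scheduling predicate:

apiVersion: v1
kind: Pod
metadata:
  name: pod4                      # mirrors the step text; hypothetical spec
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker2
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54322             # hostIP unset defaults to 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker2
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1           # conflicts with pod4's 0.0.0.0 binding; stays Pending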
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 20:56:34.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 20:56:34.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Jul 20 20:56:34.702: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T20:56:34Z generation:1 name:name1 resourceVersion:2858875 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:69d3a513-fa8e-45a4-83b4-83dfc8ec12e4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jul 20 20:56:44.707: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T20:56:44Z generation:1 name:name2 resourceVersion:2858965 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c7ae1c2e-0c45-4cec-b103-5c717882c66a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jul 20 20:56:54.712: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T20:56:34Z generation:2 name:name1 resourceVersion:2859002 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:69d3a513-fa8e-45a4-83b4-83dfc8ec12e4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jul 20 20:57:04.718: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T20:56:44Z generation:2 name:name2 resourceVersion:2859030 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c7ae1c2e-0c45-4cec-b103-5c717882c66a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jul 20 20:57:14.726: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T20:56:34Z generation:2 name:name1 resourceVersion:2859058 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:69d3a513-fa8e-45a4-83b4-83dfc8ec12e4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jul 20 20:57:24.734: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-20T20:56:44Z generation:2 name:name2 resourceVersion:2859089 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c7ae1c2e-0c45-4cec-b103-5c717882c66a] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 20:57:35.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2481" for this suite.
• [SLOW TEST:61.257 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":16,"skipped":226,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
namespace "container-probe-2262" for this suite. • [SLOW TEST:28.219 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":247,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 20:58:03.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7470 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-7470 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7470 Jul 20 20:58:04.066: INFO: Found 0 stateful pods, waiting for 1 Jul 20 20:58:14.071: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 20 20:58:14.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 20:58:14.358: INFO: stderr: "I0720 20:58:14.243305 106 log.go:172] (0xc000454f20) (0xc0009b86e0) Create stream\nI0720 20:58:14.243355 106 log.go:172] (0xc000454f20) (0xc0009b86e0) Stream added, broadcasting: 1\nI0720 20:58:14.247321 106 log.go:172] (0xc000454f20) Reply frame received for 1\nI0720 20:58:14.247420 106 log.go:172] (0xc000454f20) (0xc000aa0280) Create stream\nI0720 20:58:14.247459 106 log.go:172] (0xc000454f20) (0xc000aa0280) Stream added, broadcasting: 3\nI0720 20:58:14.249222 106 log.go:172] (0xc000454f20) Reply frame received for 3\nI0720 20:58:14.249269 106 log.go:172] (0xc000454f20) (0xc0007e1b80) Create stream\nI0720 20:58:14.249287 106 log.go:172] (0xc000454f20) (0xc0007e1b80) Stream added, broadcasting: 5\nI0720 20:58:14.250370 106 log.go:172] (0xc000454f20) Reply frame received for 5\nI0720 20:58:14.306383 106 log.go:172] (0xc000454f20) Data frame received for 5\nI0720 20:58:14.306411 106 log.go:172] (0xc0007e1b80) (5) Data frame handling\nI0720 20:58:14.306429 106 log.go:172] (0xc0007e1b80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 
Jul 20 20:58:14.358: INFO: stderr: "I0720 20:58:14.243305 106 log.go:172] (0xc000454f20) (0xc0009b86e0) Create stream\nI0720 20:58:14.243355 106 log.go:172] (0xc000454f20) (0xc0009b86e0) Stream added, broadcasting: 1\nI0720 20:58:14.247321 106 log.go:172] (0xc000454f20) Reply frame received for 1\nI0720 20:58:14.247420 106 log.go:172] (0xc000454f20) (0xc000aa0280) Create stream\nI0720 20:58:14.247459 106 log.go:172] (0xc000454f20) (0xc000aa0280) Stream added, broadcasting: 3\nI0720 20:58:14.249222 106 log.go:172] (0xc000454f20) Reply frame received for 3\nI0720 20:58:14.249269 106 log.go:172] (0xc000454f20) (0xc0007e1b80) Create stream\nI0720 20:58:14.249287 106 log.go:172] (0xc000454f20) (0xc0007e1b80) Stream added, broadcasting: 5\nI0720 20:58:14.250370 106 log.go:172] (0xc000454f20) Reply frame received for 5\nI0720 20:58:14.306383 106 log.go:172] (0xc000454f20) Data frame received for 5\nI0720 20:58:14.306411 106 log.go:172] (0xc0007e1b80) (5) Data frame handling\nI0720 20:58:14.306429 106 log.go:172] (0xc0007e1b80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 20:58:14.351182 106 log.go:172] (0xc000454f20) Data frame received for 3\nI0720 20:58:14.351220 106 log.go:172] (0xc000aa0280) (3) Data frame handling\nI0720 20:58:14.351250 106 log.go:172] (0xc000aa0280) (3) Data frame sent\nI0720 20:58:14.351292 106 log.go:172] (0xc000454f20) Data frame received for 3\nI0720 20:58:14.351306 106 log.go:172] (0xc000aa0280) (3) Data frame handling\nI0720 20:58:14.351565 106 log.go:172] (0xc000454f20) Data frame received for 5\nI0720 20:58:14.351577 106 log.go:172] (0xc0007e1b80) (5) Data frame handling\nI0720 20:58:14.353619 106 log.go:172] (0xc000454f20) Data frame received for 1\nI0720 20:58:14.353635 106 log.go:172] (0xc0009b86e0) (1) Data frame handling\nI0720 20:58:14.353643 106 log.go:172] (0xc0009b86e0) (1) Data frame sent\nI0720 20:58:14.353654 106 log.go:172] (0xc000454f20) (0xc0009b86e0) Stream removed, broadcasting: 1\nI0720 20:58:14.353666 106 log.go:172] (0xc000454f20) Go away received\nI0720 20:58:14.353961 106 log.go:172] (0xc000454f20) (0xc0009b86e0) Stream removed, broadcasting: 1\nI0720 20:58:14.353981 106 log.go:172] (0xc000454f20) (0xc000aa0280) Stream removed, broadcasting: 3\nI0720 20:58:14.353991 106 log.go:172] (0xc000454f20) (0xc0007e1b80) Stream removed, broadcasting: 5\n"
Jul 20 20:58:14.358: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 20:58:14.358: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jul 20 20:58:14.361: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 20 20:58:24.366: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 20 20:58:24.366: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 20:58:24.446: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 20 20:58:24.446: INFO: ss-0  jerma-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:15 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }]
Jul 20 20:58:24.446: INFO: ss-1                Pending         []
Jul 20 20:58:24.446: INFO: 
Jul 20 20:58:24.446: INFO: StatefulSet ss has not reached scale 3, at 2
Jul 20 20:58:25.451: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.929359076s
Jul 20 20:58:26.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.9247547s
Jul 20 20:58:27.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.833480094s
Jul 20 20:58:28.550: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.829251605s
Jul 20 20:58:29.555: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.825223255s
Jul 20 20:58:30.559: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.820723909s
Jul 20 20:58:31.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.816873186s
Jul 20 20:58:32.568: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.812360778s
Jul 20 20:58:33.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 807.54802ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7470
Jul 20 20:58:34.578: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 20:58:34.780: INFO: stderr: "I0720 20:58:34.702330 128 log.go:172] (0xc0009e8840) (0xc00097a0a0) Create stream\nI0720 20:58:34.702375 128 log.go:172] (0xc0009e8840) (0xc00097a0a0) Stream added, broadcasting: 1\nI0720 20:58:34.704321 128 log.go:172] (0xc0009e8840) Reply frame received for 1\nI0720 20:58:34.704396 128 log.go:172] (0xc0009e8840) (0xc00097c000) Create stream\nI0720 20:58:34.704412 128 log.go:172] (0xc0009e8840) (0xc00097c000) Stream added, broadcasting: 3\nI0720 20:58:34.705480 128 log.go:172] (0xc0009e8840) Reply frame received for 3\nI0720 20:58:34.705517 128 log.go:172] (0xc0009e8840) (0xc00097a1e0) Create stream\nI0720 20:58:34.705530 128 log.go:172] (0xc0009e8840) (0xc00097a1e0) Stream added, broadcasting: 5\nI0720 20:58:34.706285 128 log.go:172] (0xc0009e8840) Reply frame received for 5\nI0720 20:58:34.772413 128 log.go:172] (0xc0009e8840) Data frame received for 3\nI0720 20:58:34.772438 128 log.go:172] (0xc00097c000) (3) Data frame handling\nI0720 20:58:34.772458 128 log.go:172] (0xc00097c000) (3) Data frame sent\nI0720 20:58:34.772574 128 log.go:172] (0xc0009e8840) Data frame received for 3\nI0720 20:58:34.772599 128 log.go:172] (0xc00097c000) (3) Data frame handling\nI0720 20:58:34.772622 128 log.go:172] (0xc0009e8840) Data frame received for 5\nI0720 20:58:34.772644 128 log.go:172] (0xc00097a1e0) (5) Data frame handling\nI0720 20:58:34.772657 128 log.go:172] (0xc00097a1e0) (5) Data frame sent\nI0720 20:58:34.772671 128 log.go:172] (0xc0009e8840) Data frame received for 5\nI0720 20:58:34.772681 128 log.go:172] (0xc00097a1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 20:58:34.774614 128 log.go:172] (0xc0009e8840) Data frame received for 1\nI0720 20:58:34.774658 128 log.go:172] (0xc00097a0a0) (1) Data frame handling\nI0720 20:58:34.774690 128 log.go:172] (0xc00097a0a0) (1) Data frame sent\nI0720 20:58:34.774720 128 log.go:172] (0xc0009e8840) (0xc00097a0a0) Stream removed, broadcasting: 1\nI0720 20:58:34.774751 128 log.go:172] (0xc0009e8840) Go away received\nI0720 20:58:34.775118 128 log.go:172] (0xc0009e8840) (0xc00097a0a0) Stream removed, broadcasting: 1\nI0720 20:58:34.775140 128 log.go:172] (0xc0009e8840) (0xc00097c000) Stream removed, broadcasting: 3\nI0720 20:58:34.775153 128 log.go:172] (0xc0009e8840) (0xc00097a1e0) Stream removed, broadcasting: 5\n" Jul 20 20:58:34.780: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 20:58:34.781: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 20:58:34.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 20:58:35.004: INFO: stderr: "I0720 20:58:34.925972 148 log.go:172] (0xc000ac6630) (0xc000af8000) Create stream\nI0720 20:58:34.926030 148 log.go:172] (0xc000ac6630) (0xc000af8000) Stream added, broadcasting: 1\nI0720 20:58:34.928621 148 log.go:172] (0xc000ac6630) Reply frame received for 1\nI0720 20:58:34.928662 148 log.go:172] (0xc000ac6630) (0xc000719ae0) Create stream\nI0720 20:58:34.928675 148 log.go:172] (0xc000ac6630) (0xc000719ae0) Stream added, broadcasting: 3\nI0720 20:58:34.929766 148 log.go:172] (0xc000ac6630) Reply frame 
received for 3\nI0720 20:58:34.929806 148 log.go:172] (0xc000ac6630) (0xc000af80a0) Create stream\nI0720 20:58:34.929818 148 log.go:172] (0xc000ac6630) (0xc000af80a0) Stream added, broadcasting: 5\nI0720 20:58:34.930708 148 log.go:172] (0xc000ac6630) Reply frame received for 5\nI0720 20:58:34.997252 148 log.go:172] (0xc000ac6630) Data frame received for 3\nI0720 20:58:34.997286 148 log.go:172] (0xc000719ae0) (3) Data frame handling\nI0720 20:58:34.997309 148 log.go:172] (0xc000ac6630) Data frame received for 5\nI0720 20:58:34.997343 148 log.go:172] (0xc000af80a0) (5) Data frame handling\nI0720 20:58:34.997364 148 log.go:172] (0xc000af80a0) (5) Data frame sent\nI0720 20:58:34.997383 148 log.go:172] (0xc000ac6630) Data frame received for 5\nI0720 20:58:34.997411 148 log.go:172] (0xc000af80a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0720 20:58:34.997450 148 log.go:172] (0xc000719ae0) (3) Data frame sent\nI0720 20:58:34.997510 148 log.go:172] (0xc000ac6630) Data frame received for 3\nI0720 20:58:34.997533 148 log.go:172] (0xc000719ae0) (3) Data frame handling\nI0720 20:58:34.999004 148 log.go:172] (0xc000ac6630) Data frame received for 1\nI0720 20:58:34.999038 148 log.go:172] (0xc000af8000) (1) Data frame handling\nI0720 20:58:34.999067 148 log.go:172] (0xc000af8000) (1) Data frame sent\nI0720 20:58:34.999091 148 log.go:172] (0xc000ac6630) (0xc000af8000) Stream removed, broadcasting: 1\nI0720 20:58:34.999112 148 log.go:172] (0xc000ac6630) Go away received\nI0720 20:58:34.999566 148 log.go:172] (0xc000ac6630) (0xc000af8000) Stream removed, broadcasting: 1\nI0720 20:58:34.999597 148 log.go:172] (0xc000ac6630) (0xc000719ae0) Stream removed, broadcasting: 3\nI0720 20:58:34.999611 148 log.go:172] (0xc000ac6630) (0xc000af80a0) Stream removed, broadcasting: 5\n" Jul 20 20:58:35.004: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 20:58:35.004: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 20:58:35.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 20:58:35.201: INFO: stderr: "I0720 20:58:35.136822 170 log.go:172] (0xc0009fca50) (0xc000679b80) Create stream\nI0720 20:58:35.136942 170 log.go:172] (0xc0009fca50) (0xc000679b80) Stream added, broadcasting: 1\nI0720 20:58:35.139939 170 log.go:172] (0xc0009fca50) Reply frame received for 1\nI0720 20:58:35.140035 170 log.go:172] (0xc0009fca50) (0xc0008c2000) Create stream\nI0720 20:58:35.140054 170 log.go:172] (0xc0009fca50) (0xc0008c2000) Stream added, broadcasting: 3\nI0720 20:58:35.141301 170 log.go:172] (0xc0009fca50) Reply frame received for 3\nI0720 20:58:35.141333 170 log.go:172] (0xc0009fca50) (0xc000679d60) Create stream\nI0720 20:58:35.141342 170 log.go:172] (0xc0009fca50) (0xc000679d60) Stream added, broadcasting: 5\nI0720 20:58:35.142272 170 log.go:172] (0xc0009fca50) Reply frame received for 5\nI0720 20:58:35.192490 170 log.go:172] (0xc0009fca50) Data frame received for 3\nI0720 20:58:35.192520 170 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0720 20:58:35.192527 170 log.go:172] (0xc0008c2000) (3) Data frame sent\nI0720 20:58:35.192545 170 log.go:172] (0xc0009fca50) Data frame received for 5\nI0720 20:58:35.192552 170 log.go:172] (0xc000679d60) (5) 
Data frame handling\nI0720 20:58:35.192574 170 log.go:172] (0xc000679d60) (5) Data frame sent\nI0720 20:58:35.192581 170 log.go:172] (0xc0009fca50) Data frame received for 5\nI0720 20:58:35.192586 170 log.go:172] (0xc000679d60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0720 20:58:35.192715 170 log.go:172] (0xc0009fca50) Data frame received for 3\nI0720 20:58:35.192792 170 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0720 20:58:35.196997 170 log.go:172] (0xc0009fca50) Data frame received for 1\nI0720 20:58:35.197021 170 log.go:172] (0xc000679b80) (1) Data frame handling\nI0720 20:58:35.197033 170 log.go:172] (0xc000679b80) (1) Data frame sent\nI0720 20:58:35.197044 170 log.go:172] (0xc0009fca50) (0xc000679b80) Stream removed, broadcasting: 1\nI0720 20:58:35.197060 170 log.go:172] (0xc0009fca50) Go away received\nI0720 20:58:35.197411 170 log.go:172] (0xc0009fca50) (0xc000679b80) Stream removed, broadcasting: 1\nI0720 20:58:35.197431 170 log.go:172] (0xc0009fca50) (0xc0008c2000) Stream removed, broadcasting: 3\nI0720 20:58:35.197443 170 log.go:172] (0xc0009fca50) (0xc000679d60) Stream removed, broadcasting: 5\n" Jul 20 20:58:35.201: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jul 20 20:58:35.201: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jul 20 20:58:35.224: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jul 20 20:58:45.230: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 20 20:58:45.230: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jul 20 20:58:45.230: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 20 20:58:45.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 20:58:45.450: INFO: stderr: "I0720 20:58:45.363766 193 log.go:172] (0xc0005106e0) (0xc0006348c0) Create stream\nI0720 20:58:45.363821 193 log.go:172] (0xc0005106e0) (0xc0006348c0) Stream added, broadcasting: 1\nI0720 20:58:45.366568 193 log.go:172] (0xc0005106e0) Reply frame received for 1\nI0720 20:58:45.366624 193 log.go:172] (0xc0005106e0) (0xc0006ff680) Create stream\nI0720 20:58:45.366639 193 log.go:172] (0xc0005106e0) (0xc0006ff680) Stream added, broadcasting: 3\nI0720 20:58:45.367479 193 log.go:172] (0xc0005106e0) Reply frame received for 3\nI0720 20:58:45.367522 193 log.go:172] (0xc0005106e0) (0xc000c26000) Create stream\nI0720 20:58:45.367539 193 log.go:172] (0xc0005106e0) (0xc000c26000) Stream added, broadcasting: 5\nI0720 20:58:45.368462 193 log.go:172] (0xc0005106e0) Reply frame received for 5\nI0720 20:58:45.442844 193 log.go:172] (0xc0005106e0) Data frame received for 5\nI0720 20:58:45.442878 193 log.go:172] (0xc000c26000) (5) Data frame handling\nI0720 20:58:45.442890 193 log.go:172] (0xc000c26000) (5) Data frame sent\nI0720 20:58:45.442898 193 log.go:172] (0xc0005106e0) Data frame received for 5\nI0720 20:58:45.442908 193 log.go:172] (0xc000c26000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 20:58:45.442933 193 log.go:172] (0xc0005106e0) Data frame received for 
3\nI0720 20:58:45.442942 193 log.go:172] (0xc0006ff680) (3) Data frame handling\nI0720 20:58:45.442951 193 log.go:172] (0xc0006ff680) (3) Data frame sent\nI0720 20:58:45.442959 193 log.go:172] (0xc0005106e0) Data frame received for 3\nI0720 20:58:45.442967 193 log.go:172] (0xc0006ff680) (3) Data frame handling\nI0720 20:58:45.444305 193 log.go:172] (0xc0005106e0) Data frame received for 1\nI0720 20:58:45.444320 193 log.go:172] (0xc0006348c0) (1) Data frame handling\nI0720 20:58:45.444329 193 log.go:172] (0xc0006348c0) (1) Data frame sent\nI0720 20:58:45.444340 193 log.go:172] (0xc0005106e0) (0xc0006348c0) Stream removed, broadcasting: 1\nI0720 20:58:45.444410 193 log.go:172] (0xc0005106e0) Go away received\nI0720 20:58:45.444659 193 log.go:172] (0xc0005106e0) (0xc0006348c0) Stream removed, broadcasting: 1\nI0720 20:58:45.444674 193 log.go:172] (0xc0005106e0) (0xc0006ff680) Stream removed, broadcasting: 3\nI0720 20:58:45.444684 193 log.go:172] (0xc0005106e0) (0xc000c26000) Stream removed, broadcasting: 5\n" Jul 20 20:58:45.450: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 20:58:45.450: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 20:58:45.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 20:58:45.709: INFO: stderr: "I0720 20:58:45.597466 214 log.go:172] (0xc0001053f0) (0xc000a32000) Create stream\nI0720 20:58:45.597533 214 log.go:172] (0xc0001053f0) (0xc000a32000) Stream added, broadcasting: 1\nI0720 20:58:45.600271 214 log.go:172] (0xc0001053f0) Reply frame received for 1\nI0720 20:58:45.600310 214 log.go:172] (0xc0001053f0) (0xc000a320a0) Create stream\nI0720 20:58:45.600321 214 log.go:172] (0xc0001053f0) (0xc000a320a0) Stream added, broadcasting: 3\nI0720 20:58:45.601375 214 log.go:172] (0xc0001053f0) Reply frame received for 3\nI0720 20:58:45.601431 214 log.go:172] (0xc0001053f0) (0xc000a321e0) Create stream\nI0720 20:58:45.601453 214 log.go:172] (0xc0001053f0) (0xc000a321e0) Stream added, broadcasting: 5\nI0720 20:58:45.602352 214 log.go:172] (0xc0001053f0) Reply frame received for 5\nI0720 20:58:45.670650 214 log.go:172] (0xc0001053f0) Data frame received for 5\nI0720 20:58:45.670675 214 log.go:172] (0xc000a321e0) (5) Data frame handling\nI0720 20:58:45.670697 214 log.go:172] (0xc000a321e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 20:58:45.702571 214 log.go:172] (0xc0001053f0) Data frame received for 5\nI0720 20:58:45.702615 214 log.go:172] (0xc000a321e0) (5) Data frame handling\nI0720 20:58:45.702648 214 log.go:172] (0xc0001053f0) Data frame received for 3\nI0720 20:58:45.702659 214 log.go:172] (0xc000a320a0) (3) Data frame handling\nI0720 20:58:45.702670 214 log.go:172] (0xc000a320a0) (3) Data frame sent\nI0720 20:58:45.702686 214 log.go:172] (0xc0001053f0) Data frame received for 3\nI0720 20:58:45.702694 214 log.go:172] (0xc000a320a0) (3) Data frame handling\nI0720 20:58:45.704407 214 log.go:172] (0xc0001053f0) Data frame received for 1\nI0720 20:58:45.704435 214 log.go:172] (0xc000a32000) (1) Data frame handling\nI0720 20:58:45.704454 214 log.go:172] (0xc000a32000) (1) Data frame sent\nI0720 20:58:45.704469 214 log.go:172] (0xc0001053f0) (0xc000a32000) Stream removed, broadcasting: 1\nI0720 20:58:45.704489 214 log.go:172] (0xc0001053f0) Go away 
received\nI0720 20:58:45.704849 214 log.go:172] (0xc0001053f0) (0xc000a32000) Stream removed, broadcasting: 1\nI0720 20:58:45.704866 214 log.go:172] (0xc0001053f0) (0xc000a320a0) Stream removed, broadcasting: 3\nI0720 20:58:45.704875 214 log.go:172] (0xc0001053f0) (0xc000a321e0) Stream removed, broadcasting: 5\n" Jul 20 20:58:45.709: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 20:58:45.709: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 20:58:45.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jul 20 20:58:45.924: INFO: stderr: "I0720 20:58:45.832148 234 log.go:172] (0xc000b954a0) (0xc0009e2460) Create stream\nI0720 20:58:45.832200 234 log.go:172] (0xc000b954a0) (0xc0009e2460) Stream added, broadcasting: 1\nI0720 20:58:45.835979 234 log.go:172] (0xc000b954a0) Reply frame received for 1\nI0720 20:58:45.836040 234 log.go:172] (0xc000b954a0) (0xc00066e640) Create stream\nI0720 20:58:45.836059 234 log.go:172] (0xc000b954a0) (0xc00066e640) Stream added, broadcasting: 3\nI0720 20:58:45.837141 234 log.go:172] (0xc000b954a0) Reply frame received for 3\nI0720 20:58:45.837175 234 log.go:172] (0xc000b954a0) (0xc000427400) Create stream\nI0720 20:58:45.837185 234 log.go:172] (0xc000b954a0) (0xc000427400) Stream added, broadcasting: 5\nI0720 20:58:45.837984 234 log.go:172] (0xc000b954a0) Reply frame received for 5\nI0720 20:58:45.882498 234 log.go:172] (0xc000b954a0) Data frame received for 5\nI0720 20:58:45.882520 234 log.go:172] (0xc000427400) (5) Data frame handling\nI0720 20:58:45.882532 234 log.go:172] (0xc000427400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 20:58:45.918436 234 log.go:172] (0xc000b954a0) Data frame received for 3\nI0720 20:58:45.918453 234 log.go:172] (0xc00066e640) (3) Data frame handling\nI0720 20:58:45.918467 234 log.go:172] (0xc000b954a0) Data frame received for 5\nI0720 20:58:45.918485 234 log.go:172] (0xc000427400) (5) Data frame handling\nI0720 20:58:45.918574 234 log.go:172] (0xc00066e640) (3) Data frame sent\nI0720 20:58:45.918655 234 log.go:172] (0xc000b954a0) Data frame received for 3\nI0720 20:58:45.918676 234 log.go:172] (0xc00066e640) (3) Data frame handling\nI0720 20:58:45.920181 234 log.go:172] (0xc000b954a0) Data frame received for 1\nI0720 20:58:45.920203 234 log.go:172] (0xc0009e2460) (1) Data frame handling\nI0720 20:58:45.920216 234 log.go:172] (0xc0009e2460) (1) Data frame sent\nI0720 20:58:45.920233 234 log.go:172] (0xc000b954a0) (0xc0009e2460) Stream removed, broadcasting: 1\nI0720 20:58:45.920316 234 log.go:172] (0xc000b954a0) Go away received\nI0720 20:58:45.920499 234 log.go:172] (0xc000b954a0) (0xc0009e2460) Stream removed, broadcasting: 1\nI0720 20:58:45.920514 234 log.go:172] (0xc000b954a0) (0xc00066e640) Stream removed, broadcasting: 3\nI0720 20:58:45.920524 234 log.go:172] (0xc000b954a0) (0xc000427400) Stream removed, broadcasting: 5\n" Jul 20 20:58:45.925: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jul 20 20:58:45.925: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jul 20 20:58:45.925: INFO: Waiting for statefulset status.replicas updated to 0 Jul 20 20:58:45.928: INFO: Waiting for stateful set 
status.readyReplicas to become 0, currently 3 Jul 20 20:58:55.935: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 20 20:58:55.935: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 20 20:58:55.935: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 20 20:58:55.950: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 20:58:55.950: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }] Jul 20 20:58:55.950: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC }] Jul 20 20:58:55.950: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC }] Jul 20 20:58:55.950: INFO: Jul 20 20:58:55.950: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 20:58:56.955: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 20:58:56.955: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }] Jul 20 20:58:56.955: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC }] Jul 20 20:58:56.955: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC }] Jul 20 20:58:56.955: INFO: Jul 20 20:58:56.955: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 20:58:57.969: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 20:58:57.969: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }] Jul 20 20:58:57.969: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC }] Jul 20 20:58:57.969: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC }] Jul 20 20:58:57.969: INFO: Jul 20 20:58:57.969: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 20:58:58.974: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 20:58:58.974: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }] Jul 20 20:58:58.975: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC }] Jul 20 20:58:58.975: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:24 +0000 UTC }] Jul 20 20:58:58.975: INFO: Jul 20 20:58:58.975: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 20 20:58:59.978: INFO: POD NODE PHASE GRACE 
CONDITIONS Jul 20 20:58:59.978: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }] Jul 20 20:58:59.978: INFO: Jul 20 20:58:59.978: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 20 20:59:00.982: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 20:59:00.982: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }] Jul 20 20:59:00.982: INFO: Jul 20 20:59:00.982: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 20 20:59:02.002: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 20:59:02.002: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }] Jul 20 20:59:02.002: INFO: Jul 20 20:59:02.002: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 20 20:59:03.006: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 20:59:03.006: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }] Jul 20 20:59:03.006: INFO: Jul 20 20:59:03.006: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 20 20:59:04.011: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 20:59:04.011: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }] Jul 20 20:59:04.011: INFO: Jul 20 20:59:04.011: INFO: StatefulSet ss has not reached scale 0, at 1 Jul 20 20:59:05.015: INFO: POD NODE PHASE GRACE CONDITIONS Jul 20 20:59:05.015: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-20 20:58:04 +0000 UTC }]
Jul 20 20:59:05.015: INFO: 
Jul 20 20:59:05.015: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7470
Jul 20 20:59:06.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 20:59:06.173: INFO: rc: 1
Jul 20 20:59:06.174: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("webserver")
error: exit status 1
Jul 20 20:59:16.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 20:59:16.276: INFO: rc: 1
Jul 20 20:59:16.276: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Jul 20 20:59:26.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 20:59:26.375: INFO: rc: 1
Jul 20 20:59:26.375: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Jul 20 20:59:36.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 20:59:36.476: INFO: rc: 1
Jul 20 20:59:36.476: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Jul 20 20:59:46.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 20:59:46.572: INFO: rc: 1
Jul 20 20:59:46.572: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Jul 20 20:59:56.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec
--namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 20:59:56.668: INFO: rc: 1 Jul 20 20:59:56.668: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:00:06.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:00:06.771: INFO: rc: 1 Jul 20 21:00:06.771: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:00:16.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:00:16.859: INFO: rc: 1 Jul 20 21:00:16.859: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:00:26.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:00:26.961: INFO: rc: 1 Jul 20 21:00:26.961: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:00:36.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:00:37.052: INFO: rc: 1 Jul 20 21:00:37.052: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:00:47.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:00:47.150: INFO: rc: 1 Jul 20 21:00:47.150: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:00:57.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:00:57.245: INFO: rc: 1 Jul 20 21:00:57.245: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:01:07.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:01:10.323: INFO: rc: 1 Jul 20 21:01:10.323: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:01:20.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:01:20.436: INFO: rc: 1 Jul 20 21:01:20.436: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:01:30.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:01:30.531: INFO: rc: 1 Jul 20 21:01:30.531: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:01:40.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:01:40.634: INFO: rc: 1 Jul 20 21:01:40.634: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:01:50.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:01:50.741: INFO: rc: 1 Jul 20 21:01:50.741: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:02:00.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Jul 20 21:02:00.838: INFO: rc: 1 Jul 20 21:02:00.838: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:02:10.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:02:10.935: INFO: rc: 1 Jul 20 21:02:10.935: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:02:20.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:02:21.038: INFO: rc: 1 Jul 20 21:02:21.038: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:02:31.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:02:31.140: INFO: rc: 1 Jul 20 21:02:31.140: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:02:41.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:02:41.242: INFO: rc: 1 Jul 20 21:02:41.242: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:02:51.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:02:51.344: INFO: rc: 1 Jul 20 21:02:51.344: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:03:01.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 
21:03:01.442: INFO: rc: 1 Jul 20 21:03:01.442: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:03:11.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:03:11.558: INFO: rc: 1 Jul 20 21:03:11.558: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:03:21.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:03:21.692: INFO: rc: 1 Jul 20 21:03:21.692: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:03:31.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:03:31.909: INFO: rc: 1 Jul 20 21:03:31.910: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:03:41.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:03:42.008: INFO: rc: 1 Jul 20 21:03:42.008: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:03:52.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:03:52.103: INFO: rc: 1 Jul 20 21:03:52.104: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 20 21:04:02.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jul 20 21:04:02.212: INFO: rc: 1 Jul 20 
21:04:02.212: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Jul 20 21:04:12.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7470 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 21:04:12.313: INFO: rc: 1
Jul 20 21:04:12.313: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Jul 20 21:04:12.313: INFO: Scaling statefulset ss to 0
Jul 20 21:04:12.321: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 20 21:04:12.323: INFO: Deleting all statefulset in ns statefulset-7470
Jul 20 21:04:12.325: INFO: Scaling statefulset ss to 0
Jul 20 21:04:12.332: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 21:04:12.335: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:04:12.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7470" for this suite.
• [SLOW TEST:368.870 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":18,"skipped":248,"failed":0}
SSSSSSSS
------------------------------
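A note on the long retry tail above, and on what the test was doing: it makes pods unready by moving httpd's index.html out of the document root over kubectl exec, so the webserver container's readiness check starts failing, then shows that scale-up to 3 and scale-down to 0 both proceed anyway. Once ss-0 has been deleted during scale-down there is nothing left to exec into, so the restore command keeps failing with NotFound until the scale-to-0 assertion completes; that tail appears to be benign here, and the test still passes. A standalone sketch of that exec using only Go's standard library; the command line is the one the log prints, while the helper name and error handling are ours, not the suite's:

package main

import (
	"fmt"
	"os/exec"
)

// breakReadiness runs the same kubectl exec the log shows: it moves
// httpd's index.html out of the document root so the webserver
// container's readiness check starts failing. The trailing "|| true"
// keeps the shell's exit status 0 whether or not the file is there.
func breakReadiness(ns, pod string) (string, error) {
	cmd := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config",
		"exec", "--namespace="+ns, pod, "--",
		"/bin/sh", "-x", "-c",
		"mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := breakReadiness("statefulset-7470", "ss-0")
	fmt.Println(out, err)
}

"Burst" scaling here corresponds to a StatefulSet whose pod-management policy is Parallel, which lets the controller create and delete pods without waiting for each ordinal's predecessor to become Ready.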
[sig-network] DNS should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:04:12.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6044.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6044.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6044.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6044.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6044.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6044.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6044.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6044.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6044.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6044.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 182.89.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.89.182_udp@PTR;check="$$(dig +tcp +noall +answer +search 182.89.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.89.182_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6044.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6044.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6044.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6044.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6044.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6044.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6044.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6044.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6044.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6044.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6044.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 182.89.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.89.182_udp@PTR;check="$$(dig +tcp +noall +answer +search 182.89.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.89.182_tcp@PTR;sleep 1; done
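The two probe pods (wheezy and jessie, two different resolver images) loop over dig for the same set of records, and each successful lookup drops an OK file under /results that the test then fetches. Outside of dig, the equivalent queries through Go's standard resolver look roughly like this; the names and IP are the ones from this run, the program itself is only an illustration and resolves only from inside the cluster:

package main

import (
	"fmt"
	"net"
)

// The same three record types the dig loops above poll for.
func main() {
	svc := "dns-test-service.dns-6044.svc.cluster.local"

	// A records for the service name.
	addrs, err := net.LookupHost(svc)
	fmt.Println("A:", addrs, err)

	// SRV record advertised for the service's named port "http"/TCP.
	cname, srvs, err := net.LookupSRV("http", "tcp", svc)
	fmt.Println("SRV:", cname, srvs, err)

	// PTR record for the cluster IP (182.89.97.10.in-addr.arpa above).
	names, err := net.LookupAddr("10.97.89.182")
	fmt.Println("PTR:", names, err)
}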
PTR)" && test -n "$$check" && echo OK > /results/10.97.89.182_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 21:04:18.635: INFO: Unable to read wheezy_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:18.638: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:18.641: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:18.643: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:18.666: INFO: Unable to read jessie_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:18.668: INFO: Unable to read jessie_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:18.671: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:18.674: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:18.690: INFO: Lookups using dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87 failed for: [wheezy_udp@dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_udp@dns-test-service.dns-6044.svc.cluster.local jessie_tcp@dns-test-service.dns-6044.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local] Jul 20 21:04:23.694: INFO: Unable to read wheezy_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:23.705: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods 
dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:23.741: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:23.743: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:23.760: INFO: Unable to read jessie_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:23.762: INFO: Unable to read jessie_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:23.765: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:23.767: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:23.785: INFO: Lookups using dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87 failed for: [wheezy_udp@dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_udp@dns-test-service.dns-6044.svc.cluster.local jessie_tcp@dns-test-service.dns-6044.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local] Jul 20 21:04:28.695: INFO: Unable to read wheezy_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:28.698: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:28.701: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:28.705: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:28.726: INFO: Unable to read jessie_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the 
server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:28.729: INFO: Unable to read jessie_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:28.731: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:28.734: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:28.750: INFO: Lookups using dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87 failed for: [wheezy_udp@dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_udp@dns-test-service.dns-6044.svc.cluster.local jessie_tcp@dns-test-service.dns-6044.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local] Jul 20 21:04:33.706: INFO: Unable to read wheezy_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:33.708: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:33.711: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:33.713: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:33.732: INFO: Unable to read jessie_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:33.755: INFO: Unable to read jessie_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:33.758: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:33.761: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod 
dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:33.777: INFO: Lookups using dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87 failed for: [wheezy_udp@dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_udp@dns-test-service.dns-6044.svc.cluster.local jessie_tcp@dns-test-service.dns-6044.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local] Jul 20 21:04:38.694: INFO: Unable to read wheezy_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:38.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:38.699: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:38.702: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:38.817: INFO: Unable to read jessie_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:38.820: INFO: Unable to read jessie_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:38.822: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:38.825: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:38.842: INFO: Lookups using dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87 failed for: [wheezy_udp@dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_udp@dns-test-service.dns-6044.svc.cluster.local jessie_tcp@dns-test-service.dns-6044.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local] Jul 20 
21:04:43.702: INFO: Unable to read wheezy_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:43.705: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:43.708: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:43.711: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:43.733: INFO: Unable to read jessie_udp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:43.736: INFO: Unable to read jessie_tcp@dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:43.738: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:43.740: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local from pod dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87: the server could not find the requested resource (get pods dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87) Jul 20 21:04:43.764: INFO: Lookups using dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87 failed for: [wheezy_udp@dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@dns-test-service.dns-6044.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_udp@dns-test-service.dns-6044.svc.cluster.local jessie_tcp@dns-test-service.dns-6044.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6044.svc.cluster.local] Jul 20 21:04:48.747: INFO: DNS probes using dns-6044/dns-test-6424fc9e-0bba-4312-bc23-aa78ad6c6b87 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:04:49.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6044" for this suite. 
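Editor's note: the probe commands quoted above are templated strings in which every `$` is doubled; unescaped, each check is a plain dig call whose non-empty answer section is recorded as an OK file. A minimal sketch of one UDP/TCP check pair, reusing the service name and results directory from this run (the result filenames are shortened here for readability; the test writes one file per name/protocol pair, which is what the prober reads back in the "looking for the results" step):
for i in $(seq 1 600); do
  # UDP lookup (dig's default); any non-empty answer section counts as a pass
  check="$(dig +notcp +noall +answer +search dns-test-service.dns-6044.svc.cluster.local A)" \
    && test -n "$check" && echo OK > /results/udp@dns-test-service
  # the same query forced over TCP
  check="$(dig +tcp +noall +answer +search dns-test-service.dns-6044.svc.cluster.local A)" \
    && test -n "$check" && echo OK > /results/tcp@dns-test-service
  sleep 1
done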
• [SLOW TEST:37.086 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":19,"skipped":256,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:04:49.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:05:04.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1763" for this suite. STEP: Destroying namespace "nsdeletetest-3993" for this suite. Jul 20 21:05:04.945: INFO: Namespace nsdeletetest-3993 was already deleted STEP: Destroying namespace "nsdeletetest-9999" for this suite. 
• [SLOW TEST:15.501 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":20,"skipped":259,"failed":0} [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:05:04.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 20 21:05:09.518: INFO: Successfully updated pod "pod-update-activedeadlineseconds-205e6373-2229-440d-b956-d44b1d82539a" Jul 20 21:05:09.518: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-205e6373-2229-440d-b956-d44b1d82539a" in namespace "pods-830" to be "terminated due to deadline exceeded" Jul 20 21:05:09.606: INFO: Pod "pod-update-activedeadlineseconds-205e6373-2229-440d-b956-d44b1d82539a": Phase="Running", Reason="", readiness=true. Elapsed: 88.379727ms Jul 20 21:05:11.611: INFO: Pod "pod-update-activedeadlineseconds-205e6373-2229-440d-b956-d44b1d82539a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.092845704s Jul 20 21:05:11.611: INFO: Pod "pod-update-activedeadlineseconds-205e6373-2229-440d-b956-d44b1d82539a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:05:11.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-830" for this suite. 
• [SLOW TEST:6.671 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":259,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:05:11.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-65570dc3-3250-4521-9379-64ef66c44e0a STEP: Creating a pod to test consume configMaps Jul 20 21:05:11.732: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b26996a7-bd1b-4e5e-b539-0bd447cbd7c9" in namespace "projected-75" to be "success or failure" Jul 20 21:05:11.761: INFO: Pod "pod-projected-configmaps-b26996a7-bd1b-4e5e-b539-0bd447cbd7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.270844ms Jul 20 21:05:13.783: INFO: Pod "pod-projected-configmaps-b26996a7-bd1b-4e5e-b539-0bd447cbd7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051090101s Jul 20 21:05:15.787: INFO: Pod "pod-projected-configmaps-b26996a7-bd1b-4e5e-b539-0bd447cbd7c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054873138s STEP: Saw pod success Jul 20 21:05:15.787: INFO: Pod "pod-projected-configmaps-b26996a7-bd1b-4e5e-b539-0bd447cbd7c9" satisfied condition "success or failure" Jul 20 21:05:15.790: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b26996a7-bd1b-4e5e-b539-0bd447cbd7c9 container projected-configmap-volume-test: STEP: delete the pod Jul 20 21:05:15.891: INFO: Waiting for pod pod-projected-configmaps-b26996a7-bd1b-4e5e-b539-0bd447cbd7c9 to disappear Jul 20 21:05:15.903: INFO: Pod pod-projected-configmaps-b26996a7-bd1b-4e5e-b539-0bd447cbd7c9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:05:15.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-75" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:05:15.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 20 21:05:24.059: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 21:05:24.176: INFO: Pod pod-with-poststart-exec-hook still exists Jul 20 21:05:26.176: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 21:05:26.180: INFO: Pod pod-with-poststart-exec-hook still exists Jul 20 21:05:28.176: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 20 21:05:28.181: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:05:28.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9875" for this suite. 
• [SLOW TEST:12.280 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":288,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:05:28.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:05:35.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6865" for this suite. STEP: Destroying namespace "nsdeletetest-2412" for this suite. Jul 20 21:05:35.767: INFO: Namespace nsdeletetest-2412 was already deleted STEP: Destroying namespace "nsdeletetest-4062" for this suite. 
• [SLOW TEST:7.581 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":24,"skipped":292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:05:35.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 20 21:05:35.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jul 20 21:05:36.078: INFO: stderr: "" Jul 20 21:05:36.078: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.8\", GitCommit:\"35dc4cdc26cfcb6614059c4c6e836e5f0dc61dee\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:52:59Z\", GoVersion:\"go1.13.11\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:05:36.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2785" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":25,"skipped":315,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:05:36.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-2296 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2296 to expose endpoints map[] Jul 20 21:05:36.210: INFO: Get endpoints failed (3.235763ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jul 20 21:05:37.349: INFO: successfully validated that service multi-endpoint-test in namespace services-2296 exposes endpoints map[] (1.142364629s elapsed) STEP: Creating pod pod1 in namespace services-2296 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2296 to expose endpoints map[pod1:[100]] Jul 20 21:05:41.694: INFO: successfully validated that service multi-endpoint-test in namespace services-2296 exposes endpoints map[pod1:[100]] (4.308958396s elapsed) STEP: Creating pod pod2 in namespace services-2296 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2296 to expose endpoints map[pod1:[100] pod2:[101]] Jul 20 21:05:45.863: INFO: successfully validated that service multi-endpoint-test in namespace services-2296 exposes endpoints map[pod1:[100] pod2:[101]] (4.165489949s elapsed) STEP: Deleting pod pod1 in namespace services-2296 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2296 to expose endpoints map[pod2:[101]] Jul 20 21:05:45.882: INFO: successfully validated that service multi-endpoint-test in namespace services-2296 exposes endpoints map[pod2:[101]] (12.944815ms elapsed) STEP: Deleting pod pod2 in namespace services-2296 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2296 to expose endpoints map[] Jul 20 21:05:45.948: INFO: successfully validated that service multi-endpoint-test in namespace services-2296 exposes endpoints map[] (61.684117ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:05:45.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2296" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.241 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":26,"skipped":324,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:05:46.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Jul 20 21:05:52.464: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8302 PodName:pod-sharedvolume-4680aca1-3367-47c9-b303-9b429c6cfdde ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 20 21:05:52.464: INFO: >>> kubeConfig: /root/.kube/config I0720 21:05:52.494931 6 log.go:172] (0xc001dec160) (0xc00296b400) Create stream I0720 21:05:52.494962 6 log.go:172] (0xc001dec160) (0xc00296b400) Stream added, broadcasting: 1 I0720 21:05:52.497470 6 log.go:172] (0xc001dec160) Reply frame received for 1 I0720 21:05:52.497534 6 log.go:172] (0xc001dec160) (0xc00278c000) Create stream I0720 21:05:52.497552 6 log.go:172] (0xc001dec160) (0xc00278c000) Stream added, broadcasting: 3 I0720 21:05:52.499160 6 log.go:172] (0xc001dec160) Reply frame received for 3 I0720 21:05:52.499234 6 log.go:172] (0xc001dec160) (0xc00185c280) Create stream I0720 21:05:52.499255 6 log.go:172] (0xc001dec160) (0xc00185c280) Stream added, broadcasting: 5 I0720 21:05:52.502102 6 log.go:172] (0xc001dec160) Reply frame received for 5 I0720 21:05:52.551270 6 log.go:172] (0xc001dec160) Data frame received for 5 I0720 21:05:52.551296 6 log.go:172] (0xc00185c280) (5) Data frame handling I0720 21:05:52.551311 6 log.go:172] (0xc001dec160) Data frame received for 3 I0720 21:05:52.551316 6 log.go:172] (0xc00278c000) (3) Data frame handling I0720 21:05:52.551328 6 log.go:172] (0xc00278c000) (3) Data frame sent I0720 21:05:52.551334 6 log.go:172] (0xc001dec160) Data frame received for 3 I0720 21:05:52.551340 6 log.go:172] (0xc00278c000) (3) Data frame handling I0720 21:05:52.553367 6 log.go:172] (0xc001dec160) Data frame received for 1 I0720 21:05:52.553390 6 log.go:172] (0xc00296b400) (1) Data frame handling I0720 21:05:52.553401 6 log.go:172] (0xc00296b400) (1) Data frame sent I0720 21:05:52.553418 6 log.go:172] (0xc001dec160) (0xc00296b400) Stream removed, broadcasting: 1 I0720 21:05:52.553437 6 log.go:172] (0xc001dec160) Go away received
I0720 21:05:52.553852 6 log.go:172] (0xc001dec160) (0xc00296b400) Stream removed, broadcasting: 1 I0720 21:05:52.553875 6 log.go:172] (0xc001dec160) (0xc00278c000) Stream removed, broadcasting: 3 I0720 21:05:52.553888 6 log.go:172] (0xc001dec160) (0xc00185c280) Stream removed, broadcasting: 5 Jul 20 21:05:52.553: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:05:52.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8302" for this suite. • [SLOW TEST:6.232 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":27,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:05:52.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:05:52.681: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d043b9c-916c-4e4f-befa-8ce747c87080" in namespace "downward-api-9201" to be "success or failure" Jul 20 21:05:52.685: INFO: Pod "downwardapi-volume-3d043b9c-916c-4e4f-befa-8ce747c87080": Phase="Pending", Reason="", readiness=false. Elapsed: 3.226658ms Jul 20 21:05:54.739: INFO: Pod "downwardapi-volume-3d043b9c-916c-4e4f-befa-8ce747c87080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057218099s Jul 20 21:05:56.742: INFO: Pod "downwardapi-volume-3d043b9c-916c-4e4f-befa-8ce747c87080": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.060976073s STEP: Saw pod success Jul 20 21:05:56.742: INFO: Pod "downwardapi-volume-3d043b9c-916c-4e4f-befa-8ce747c87080" satisfied condition "success or failure" Jul 20 21:05:56.745: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3d043b9c-916c-4e4f-befa-8ce747c87080 container client-container: STEP: delete the pod Jul 20 21:05:56.770: INFO: Waiting for pod downwardapi-volume-3d043b9c-916c-4e4f-befa-8ce747c87080 to disappear Jul 20 21:05:56.828: INFO: Pod downwardapi-volume-3d043b9c-916c-4e4f-befa-8ce747c87080 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:05:56.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9201" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":354,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:05:56.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-737.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-737.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-737.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-737.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-737.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-737.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 21:06:03.119: INFO: DNS probes using dns-737/dns-test-bf50a386-36c4-4f61-b37d-51e8a54bb879 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:06:03.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-737" for this suite. • [SLOW TEST:6.390 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":29,"skipped":374,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:06:03.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1ac7b829-08f6-4656-a5a7-aa3fd87d6a15 STEP: Creating a pod to test consume secrets Jul 20 21:06:03.810: INFO: Waiting up to 5m0s for pod "pod-secrets-a942f0bf-86eb-4ee2-a3f1-60d5974edde0" in namespace "secrets-1261" to be "success or failure" Jul 20 21:06:03.841: INFO: Pod "pod-secrets-a942f0bf-86eb-4ee2-a3f1-60d5974edde0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.940835ms Jul 20 21:06:05.846: INFO: Pod "pod-secrets-a942f0bf-86eb-4ee2-a3f1-60d5974edde0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035257936s Jul 20 21:06:07.973: INFO: Pod "pod-secrets-a942f0bf-86eb-4ee2-a3f1-60d5974edde0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16259938s Jul 20 21:06:09.977: INFO: Pod "pod-secrets-a942f0bf-86eb-4ee2-a3f1-60d5974edde0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.166348499s STEP: Saw pod success Jul 20 21:06:09.977: INFO: Pod "pod-secrets-a942f0bf-86eb-4ee2-a3f1-60d5974edde0" satisfied condition "success or failure" Jul 20 21:06:09.979: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a942f0bf-86eb-4ee2-a3f1-60d5974edde0 container secret-volume-test: STEP: delete the pod Jul 20 21:06:10.027: INFO: Waiting for pod pod-secrets-a942f0bf-86eb-4ee2-a3f1-60d5974edde0 to disappear Jul 20 21:06:10.038: INFO: Pod pod-secrets-a942f0bf-86eb-4ee2-a3f1-60d5974edde0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:06:10.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1261" for this suite. STEP: Destroying namespace "secret-namespace-4975" for this suite. • [SLOW TEST:6.856 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":378,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:06:10.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Jul 20 21:06:10.146: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 20 21:06:10.204: INFO: Waiting for terminating namespaces to be deleted... 
Jul 20 21:06:10.207: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Jul 20 21:06:10.212: INFO: kube-proxy-2ssxj from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container status recorded) Jul 20 21:06:10.212: INFO: Container kube-proxy ready: true, restart count 0 Jul 20 21:06:10.212: INFO: kindnet-bqk7h from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container status recorded) Jul 20 21:06:10.212: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 21:06:10.212: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Jul 20 21:06:10.217: INFO: kindnet-klj8h from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container status recorded) Jul 20 21:06:10.217: INFO: Container kindnet-cni ready: true, restart count 0 Jul 20 21:06:10.217: INFO: kube-proxy-67jwf from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container status recorded) Jul 20 21:06:10.217: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b195e69d-51d8-41bb-acc9-cb58da249e23 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-b195e69d-51d8-41bb-acc9-cb58da249e23 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b195e69d-51d8-41bb-acc9-cb58da249e23 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:06:20.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2020" for this suite.
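Editor's note: the label round-trip above can be reproduced directly. The node name, label key, and value 42 are from this run; the pod name and image are illustrative:
# apply the label the scheduler will match on
kubectl label node jerma-worker2 kubernetes.io/e2e-b195e69d-51d8-41bb-acc9-cb58da249e23=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    kubernetes.io/e2e-b195e69d-51d8-41bb-acc9-cb58da249e23: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image, era-appropriate
EOF
# a trailing '-' on the key removes the label again
kubectl label node jerma-worker2 kubernetes.io/e2e-b195e69d-51d8-41bb-acc9-cb58da249e23-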
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.409 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":31,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:06:20.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod Jul 20 21:06:20.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-2115 -- logs-generator --log-lines-total 100 --run-duration 20s' Jul 20 21:06:20.883: INFO: stderr: "" Jul 20 21:06:20.883: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Jul 20 21:06:20.883: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jul 20 21:06:20.883: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2115" to be "running and ready, or succeeded" Jul 20 21:06:20.893: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.490589ms Jul 20 21:06:22.919: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035610848s Jul 20 21:06:24.923: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.039926104s Jul 20 21:06:24.923: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jul 20 21:06:24.923: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Jul 20 21:06:24.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2115' Jul 20 21:06:25.031: INFO: stderr: "" Jul 20 21:06:25.031: INFO: stdout: "I0720 21:06:23.627510 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/ks6 362\nI0720 21:06:23.829807 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/2rv 498\nI0720 21:06:24.027692 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/d9lf 310\nI0720 21:06:24.227624 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/ktm9 496\nI0720 21:06:24.427684 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/nbwb 533\nI0720 21:06:24.627674 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/p26g 287\nI0720 21:06:24.827716 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/99p 519\n" STEP: limiting log lines Jul 20 21:06:25.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2115 --tail=1' Jul 20 21:06:25.134: INFO: stderr: "" Jul 20 21:06:25.134: INFO: stdout: "I0720 21:06:25.027645 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/x76 260\n" Jul 20 21:06:25.134: INFO: got output "I0720 21:06:25.027645 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/x76 260\n" STEP: limiting log bytes Jul 20 21:06:25.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2115 --limit-bytes=1' Jul 20 21:06:25.231: INFO: stderr: "" Jul 20 21:06:25.231: INFO: stdout: "I" Jul 20 21:06:25.231: INFO: got output "I" STEP: exposing timestamps Jul 20 21:06:25.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2115 --tail=1 --timestamps' Jul 20 21:06:25.341: INFO: stderr: "" Jul 20 21:06:25.341: INFO: stdout: "2020-07-20T21:06:25.227772855Z I0720 21:06:25.227651 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/c4jb 234\n" Jul 20 21:06:25.341: INFO: got output "2020-07-20T21:06:25.227772855Z I0720 21:06:25.227651 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/c4jb 234\n" STEP: restricting to a time range Jul 20 21:06:27.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2115 --since=1s' Jul 20 21:06:27.961: INFO: stderr: "" Jul 20 21:06:27.962: INFO: stdout: "I0720 21:06:27.027700 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/rxbp 451\nI0720 21:06:27.227674 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/s8k 358\nI0720 21:06:27.427653 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/v55 395\nI0720 21:06:27.627640 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/gq9 293\nI0720 21:06:27.827686 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/srg8 543\n" Jul 20 21:06:27.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2115 --since=24h' Jul 20 21:06:28.073: INFO: stderr: "" Jul 20 21:06:28.073: INFO: stdout: "I0720 21:06:23.627510 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/ks6 362\nI0720 21:06:23.829807 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/2rv 498\nI0720 21:06:24.027692 1 logs_generator.go:76] 2 POST
/api/v1/namespaces/ns/pods/d9lf 310\nI0720 21:06:24.227624 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/ktm9 496\nI0720 21:06:24.427684 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/nbwb 533\nI0720 21:06:24.627674 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/p26g 287\nI0720 21:06:24.827716 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/99p 519\nI0720 21:06:25.027645 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/x76 260\nI0720 21:06:25.227651 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/c4jb 234\nI0720 21:06:25.427648 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/w6g 287\nI0720 21:06:25.627662 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/2w6 323\nI0720 21:06:25.827665 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/cpm 265\nI0720 21:06:26.027667 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/sp9 415\nI0720 21:06:26.227651 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/tqzw 534\nI0720 21:06:26.427642 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/kbv6 290\nI0720 21:06:26.627632 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/wnwx 492\nI0720 21:06:26.827626 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/6bvn 439\nI0720 21:06:27.027700 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/rxbp 451\nI0720 21:06:27.227674 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/s8k 358\nI0720 21:06:27.427653 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/v55 395\nI0720 21:06:27.627640 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/gq9 293\nI0720 21:06:27.827686 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/srg8 543\nI0720 21:06:28.027648 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/t9p 561\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Jul 20 21:06:28.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2115' Jul 20 21:06:37.482: INFO: stderr: "" Jul 20 21:06:37.482: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:06:37.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2115" for this suite. 
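Editor's note: the four filters exercised above are independent kubectl logs flags and compose freely. Stripped of the kubeconfig plumbing, the commands from this run were:
kubectl logs logs-generator logs-generator -n kubectl-2115                  # full stream; the second argument names the container
kubectl logs logs-generator logs-generator -n kubectl-2115 --tail=1         # only the last line
kubectl logs logs-generator logs-generator -n kubectl-2115 --limit-bytes=1  # truncate output to one byte
kubectl logs logs-generator logs-generator -n kubectl-2115 --tail=1 --timestamps
kubectl logs logs-generator logs-generator -n kubectl-2115 --since=1s       # only entries newer than one second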
• [SLOW TEST:16.995 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":32,"skipped":409,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:06:37.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:06:37.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eaf21921-6a69-49c1-8dd3-bdbce42ea8b3" in namespace "downward-api-1576" to be "success or failure" Jul 20 21:06:37.638: INFO: Pod "downwardapi-volume-eaf21921-6a69-49c1-8dd3-bdbce42ea8b3": Phase="Pending", Reason="", readiness=false. Elapsed: 46.541612ms Jul 20 21:06:39.642: INFO: Pod "downwardapi-volume-eaf21921-6a69-49c1-8dd3-bdbce42ea8b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050629071s Jul 20 21:06:41.647: INFO: Pod "downwardapi-volume-eaf21921-6a69-49c1-8dd3-bdbce42ea8b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054773614s STEP: Saw pod success Jul 20 21:06:41.647: INFO: Pod "downwardapi-volume-eaf21921-6a69-49c1-8dd3-bdbce42ea8b3" satisfied condition "success or failure" Jul 20 21:06:41.649: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-eaf21921-6a69-49c1-8dd3-bdbce42ea8b3 container client-container: STEP: delete the pod Jul 20 21:06:41.683: INFO: Waiting for pod downwardapi-volume-eaf21921-6a69-49c1-8dd3-bdbce42ea8b3 to disappear Jul 20 21:06:41.691: INFO: Pod downwardapi-volume-eaf21921-6a69-49c1-8dd3-bdbce42ea8b3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:06:41.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1576" for this suite. 
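The Downward API volume spec above mounts the container's own CPU request as a file in the pod. A minimal sketch of the same wiring, assuming illustrative names (pod name, image and paths are not the test's actual fixture):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Prints the request written into the volume; with divisor 1m the file contains "250".
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF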
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":411,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:06:41.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:06:41.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b79846a-8919-44ed-8c18-07b1fa3cedc5" in namespace "projected-1883" to be "success or failure" Jul 20 21:06:41.853: INFO: Pod "downwardapi-volume-3b79846a-8919-44ed-8c18-07b1fa3cedc5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.993941ms Jul 20 21:06:43.859: INFO: Pod "downwardapi-volume-3b79846a-8919-44ed-8c18-07b1fa3cedc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009930103s Jul 20 21:06:46.051: INFO: Pod "downwardapi-volume-3b79846a-8919-44ed-8c18-07b1fa3cedc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.201858026s STEP: Saw pod success Jul 20 21:06:46.051: INFO: Pod "downwardapi-volume-3b79846a-8919-44ed-8c18-07b1fa3cedc5" satisfied condition "success or failure" Jul 20 21:06:46.054: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3b79846a-8919-44ed-8c18-07b1fa3cedc5 container client-container: STEP: delete the pod Jul 20 21:06:46.131: INFO: Waiting for pod downwardapi-volume-3b79846a-8919-44ed-8c18-07b1fa3cedc5 to disappear Jul 20 21:06:46.182: INFO: Pod downwardapi-volume-3b79846a-8919-44ed-8c18-07b1fa3cedc5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:06:46.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1883" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":416,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:06:46.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Jul 20 21:06:46.253: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:07:01.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4789" for this suite. • [SLOW TEST:15.152 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":35,"skipped":463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:07:01.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 20 21:07:01.439: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5724' Jul 20 21:07:01.545: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 20 21:07:01.545: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jul 20 21:07:01.551: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jul 20 21:07:01.558: INFO: scanned /root for discovery docs: Jul 20 21:07:01.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5724' Jul 20 21:07:17.387: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jul 20 21:07:17.387: INFO: stdout: "Created e2e-test-httpd-rc-b9b8f154d85a37058da55777f033e37a\nScaling up e2e-test-httpd-rc-b9b8f154d85a37058da55777f033e37a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b9b8f154d85a37058da55777f033e37a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b9b8f154d85a37058da55777f033e37a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jul 20 21:07:17.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5724' Jul 20 21:07:17.494: INFO: stderr: "" Jul 20 21:07:17.494: INFO: stdout: "e2e-test-httpd-rc-b9b8f154d85a37058da55777f033e37a-ms6mc e2e-test-httpd-rc-jd8hw " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Jul 20 21:07:22.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5724' Jul 20 21:07:22.597: INFO: stderr: "" Jul 20 21:07:22.597: INFO: stdout: "e2e-test-httpd-rc-b9b8f154d85a37058da55777f033e37a-ms6mc " Jul 20 21:07:22.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b9b8f154d85a37058da55777f033e37a-ms6mc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5724' Jul 20 21:07:22.704: INFO: stderr: "" Jul 20 21:07:22.704: INFO: stdout: "true" Jul 20 21:07:22.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b9b8f154d85a37058da55777f033e37a-ms6mc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5724' Jul 20 21:07:22.795: INFO: stderr: "" Jul 20 21:07:22.795: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jul 20 21:07:22.795: INFO: e2e-test-httpd-rc-b9b8f154d85a37058da55777f033e37a-ms6mc is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Jul 20 21:07:22.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5724' Jul 20 21:07:22.893: INFO: stderr: "" Jul 20 21:07:22.894: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:07:22.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5724" for this suite. • [SLOW TEST:21.627 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":36,"skipped":497,"failed":0} SSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:07:22.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:07:39.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3395" for this suite. 
• [SLOW TEST:16.136 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":37,"skipped":501,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:07:39.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jul 20 21:07:40.282: INFO: Pod name wrapped-volume-race-f4a8e4c4-94ce-437f-bcf1-881047006b84: Found 0 pods out of 5 Jul 20 21:07:45.291: INFO: Pod name wrapped-volume-race-f4a8e4c4-94ce-437f-bcf1-881047006b84: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f4a8e4c4-94ce-437f-bcf1-881047006b84 in namespace emptydir-wrapper-3027, will wait for the garbage collector to delete the pods Jul 20 21:07:59.385: INFO: Deleting ReplicationController wrapped-volume-race-f4a8e4c4-94ce-437f-bcf1-881047006b84 took: 6.745845ms Jul 20 21:07:59.485: INFO: Terminating ReplicationController wrapped-volume-race-f4a8e4c4-94ce-437f-bcf1-881047006b84 pods took: 100.252197ms STEP: Creating RC which spawns configmap-volume pods Jul 20 21:08:17.725: INFO: Pod name wrapped-volume-race-23c61410-347c-4dfa-9a1b-1f9581bcc05d: Found 0 pods out of 5 Jul 20 21:08:22.733: INFO: Pod name wrapped-volume-race-23c61410-347c-4dfa-9a1b-1f9581bcc05d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-23c61410-347c-4dfa-9a1b-1f9581bcc05d in namespace emptydir-wrapper-3027, will wait for the garbage collector to delete the pods Jul 20 21:08:36.898: INFO: Deleting ReplicationController wrapped-volume-race-23c61410-347c-4dfa-9a1b-1f9581bcc05d took: 8.449347ms Jul 20 21:08:37.299: INFO: Terminating ReplicationController wrapped-volume-race-23c61410-347c-4dfa-9a1b-1f9581bcc05d pods took: 400.326418ms STEP: Creating RC which spawns configmap-volume pods Jul 20 21:08:48.555: INFO: Pod name wrapped-volume-race-3200cf29-9bfa-42d1-9357-10ed3a43712d: Found 0 pods out of 5 Jul 20 21:08:53.563: INFO: Pod name wrapped-volume-race-3200cf29-9bfa-42d1-9357-10ed3a43712d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3200cf29-9bfa-42d1-9357-10ed3a43712d in namespace emptydir-wrapper-3027, will wait for the garbage collector to delete the pods Jul 20 21:09:07.647: INFO: Deleting ReplicationController 
wrapped-volume-race-3200cf29-9bfa-42d1-9357-10ed3a43712d took: 7.444695ms Jul 20 21:09:08.047: INFO: Terminating ReplicationController wrapped-volume-race-3200cf29-9bfa-42d1-9357-10ed3a43712d pods took: 400.33622ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:09:18.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3027" for this suite. • [SLOW TEST:99.538 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":38,"skipped":502,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:09:18.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 21:09:19.451: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 21:09:21.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876159, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876159, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876159, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876159, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 21:09:24.493: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 20 21:09:24.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:09:25.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6884" for this suite. STEP: Destroying namespace "webhook-6884-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.894 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":39,"skipped":502,"failed":0} [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:09:25.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-9259583e-0e23-4df3-84d3-70d87f88d4ac [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:09:25.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4830" for this suite. 
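The Secrets spec above relies on apiserver validation: keys in a secret's data map must be non-empty (and otherwise valid file names: alphanumerics, '-', '_', '.'), so a secret with an empty key is rejected at create time. Reproducing the rejection by hand, with an illustrative name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dmFsdWU=
EOF
# Expected outcome: the request fails validation with an error on the
# empty key in .data, and no Secret object is created.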
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":40,"skipped":502,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:09:25.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-4wnf STEP: Creating a pod to test atomic-volume-subpath Jul 20 21:09:26.666: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4wnf" in namespace "subpath-4485" to be "success or failure" Jul 20 21:09:26.736: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Pending", Reason="", readiness=false. Elapsed: 69.872337ms Jul 20 21:09:28.740: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074422117s Jul 20 21:09:30.743: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Running", Reason="", readiness=true. Elapsed: 4.077458083s Jul 20 21:09:32.747: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Running", Reason="", readiness=true. Elapsed: 6.081361614s Jul 20 21:09:34.751: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Running", Reason="", readiness=true. Elapsed: 8.08551381s Jul 20 21:09:36.761: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Running", Reason="", readiness=true. Elapsed: 10.0949321s Jul 20 21:09:38.765: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Running", Reason="", readiness=true. Elapsed: 12.099413401s Jul 20 21:09:40.770: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Running", Reason="", readiness=true. Elapsed: 14.103728387s Jul 20 21:09:42.774: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Running", Reason="", readiness=true. Elapsed: 16.108125924s Jul 20 21:09:44.778: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Running", Reason="", readiness=true. Elapsed: 18.112272584s Jul 20 21:09:46.784: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Running", Reason="", readiness=true. Elapsed: 20.117847337s Jul 20 21:09:48.788: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Running", Reason="", readiness=true. Elapsed: 22.121935287s Jul 20 21:09:50.792: INFO: Pod "pod-subpath-test-configmap-4wnf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.126003476s STEP: Saw pod success Jul 20 21:09:50.792: INFO: Pod "pod-subpath-test-configmap-4wnf" satisfied condition "success or failure" Jul 20 21:09:50.794: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-4wnf container test-container-subpath-configmap-4wnf: STEP: delete the pod Jul 20 21:09:50.839: INFO: Waiting for pod pod-subpath-test-configmap-4wnf to disappear Jul 20 21:09:50.857: INFO: Pod pod-subpath-test-configmap-4wnf no longer exists STEP: Deleting pod pod-subpath-test-configmap-4wnf Jul 20 21:09:50.857: INFO: Deleting pod "pod-subpath-test-configmap-4wnf" in namespace "subpath-4485" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:09:50.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4485" for this suite. • [SLOW TEST:25.171 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":41,"skipped":519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:09:50.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jul 20 21:09:50.966: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:10:07.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3229" for this suite. 
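The Pods spec above pairs a watch with a graceful delete: creation, the setting of deletionTimestamp, and the final removal should all surface as watch events. Roughly the same flow by hand, with an illustrative pod name and the watch running in a second terminal:

kubectl get pods --watch        # run in another terminal to observe events
kubectl run pod-submit-demo --image=nginx --restart=Never
kubectl delete pod pod-submit-demo --grace-period=30
# The watch should show ADDED, then MODIFIED once deletionTimestamp is set,
# then DELETED after the kubelet confirms termination.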
• [SLOW TEST:16.490 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":551,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:10:07.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 20 21:10:08.229: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 20 21:10:10.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876208, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876208, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876208, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876208, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 20 21:10:13.431: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the 
mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:10:13.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8308" for this suite. STEP: Destroying namespace "webhook-8308-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.523 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":43,"skipped":557,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:10:13.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7146 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7146 STEP: creating replication controller externalsvc in namespace services-7146 I0720 21:10:14.069099 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7146, replica count: 2 I0720 21:10:17.119604 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0720 21:10:20.119840 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jul 20 21:10:20.153: INFO: Creating new exec pod Jul 20 21:10:24.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7146 execpodb8z4h -- /bin/sh -x -c nslookup clusterip-service' Jul 20 21:10:24.366: INFO: stderr: "I0720 21:10:24.300977 1229 log.go:172] (0xc00035c6e0) (0xc0002f21e0) Create stream\nI0720 21:10:24.301022 1229 log.go:172] (0xc00035c6e0) (0xc0002f21e0) Stream added, broadcasting: 1\nI0720 21:10:24.302851 1229 log.go:172] 
(0xc00035c6e0) Reply frame received for 1\nI0720 21:10:24.302890 1229 log.go:172] (0xc00035c6e0) (0xc00043c0a0) Create stream\nI0720 21:10:24.302901 1229 log.go:172] (0xc00035c6e0) (0xc00043c0a0) Stream added, broadcasting: 3\nI0720 21:10:24.303764 1229 log.go:172] (0xc00035c6e0) Reply frame received for 3\nI0720 21:10:24.303804 1229 log.go:172] (0xc00035c6e0) (0xc0002f2280) Create stream\nI0720 21:10:24.303818 1229 log.go:172] (0xc00035c6e0) (0xc0002f2280) Stream added, broadcasting: 5\nI0720 21:10:24.304673 1229 log.go:172] (0xc00035c6e0) Reply frame received for 5\nI0720 21:10:24.354398 1229 log.go:172] (0xc00035c6e0) Data frame received for 5\nI0720 21:10:24.354428 1229 log.go:172] (0xc0002f2280) (5) Data frame handling\nI0720 21:10:24.354449 1229 log.go:172] (0xc0002f2280) (5) Data frame sent\n+ nslookup clusterip-service\nI0720 21:10:24.359138 1229 log.go:172] (0xc00035c6e0) Data frame received for 3\nI0720 21:10:24.359152 1229 log.go:172] (0xc00043c0a0) (3) Data frame handling\nI0720 21:10:24.359163 1229 log.go:172] (0xc00043c0a0) (3) Data frame sent\nI0720 21:10:24.360046 1229 log.go:172] (0xc00035c6e0) Data frame received for 3\nI0720 21:10:24.360061 1229 log.go:172] (0xc00043c0a0) (3) Data frame handling\nI0720 21:10:24.360071 1229 log.go:172] (0xc00043c0a0) (3) Data frame sent\nI0720 21:10:24.360519 1229 log.go:172] (0xc00035c6e0) Data frame received for 5\nI0720 21:10:24.360547 1229 log.go:172] (0xc0002f2280) (5) Data frame handling\nI0720 21:10:24.360567 1229 log.go:172] (0xc00035c6e0) Data frame received for 3\nI0720 21:10:24.360584 1229 log.go:172] (0xc00043c0a0) (3) Data frame handling\nI0720 21:10:24.361952 1229 log.go:172] (0xc00035c6e0) Data frame received for 1\nI0720 21:10:24.361978 1229 log.go:172] (0xc0002f21e0) (1) Data frame handling\nI0720 21:10:24.361996 1229 log.go:172] (0xc0002f21e0) (1) Data frame sent\nI0720 21:10:24.362014 1229 log.go:172] (0xc00035c6e0) (0xc0002f21e0) Stream removed, broadcasting: 1\nI0720 21:10:24.362037 1229 log.go:172] (0xc00035c6e0) Go away received\nI0720 21:10:24.362314 1229 log.go:172] (0xc00035c6e0) (0xc0002f21e0) Stream removed, broadcasting: 1\nI0720 21:10:24.362332 1229 log.go:172] (0xc00035c6e0) (0xc00043c0a0) Stream removed, broadcasting: 3\nI0720 21:10:24.362341 1229 log.go:172] (0xc00035c6e0) (0xc0002f2280) Stream removed, broadcasting: 5\n" Jul 20 21:10:24.366: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7146.svc.cluster.local\tcanonical name = externalsvc.services-7146.svc.cluster.local.\nName:\texternalsvc.services-7146.svc.cluster.local\nAddress: 10.103.116.114\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7146, will wait for the garbage collector to delete the pods Jul 20 21:10:24.428: INFO: Deleting ReplicationController externalsvc took: 6.718509ms Jul 20 21:10:24.528: INFO: Terminating ReplicationController externalsvc pods took: 100.207121ms Jul 20 21:10:37.542: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:10:37.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7146" for this suite. 
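The Services spec above converts an existing ClusterIP service into an ExternalName one and checks that in-cluster DNS now answers with a CNAME. Roughly the same switch by hand (names taken from this run; note the allocated clusterIP has to be cleared in the same update when changing the type):

kubectl patch service clusterip-service --namespace=services-7146 --type=merge \
  -p '{"spec": {"type": "ExternalName", "externalName": "externalsvc.services-7146.svc.cluster.local", "clusterIP": ""}}'
# From any pod in the cluster, `nslookup clusterip-service` should now return
# a CNAME for externalsvc.services-7146.svc.cluster.local, matching the
# nslookup output captured above.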
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:23.683 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":44,"skipped":559,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:10:37.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-7256/secret-test-2ff378fe-5217-448f-a83a-5260264f3ce6 STEP: Creating a pod to test consume secrets Jul 20 21:10:37.649: INFO: Waiting up to 5m0s for pod "pod-configmaps-3d70c0a6-6ff6-44ad-95ec-fd584badd480" in namespace "secrets-7256" to be "success or failure" Jul 20 21:10:37.675: INFO: Pod "pod-configmaps-3d70c0a6-6ff6-44ad-95ec-fd584badd480": Phase="Pending", Reason="", readiness=false. Elapsed: 26.134363ms Jul 20 21:10:39.679: INFO: Pod "pod-configmaps-3d70c0a6-6ff6-44ad-95ec-fd584badd480": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030056952s Jul 20 21:10:41.683: INFO: Pod "pod-configmaps-3d70c0a6-6ff6-44ad-95ec-fd584badd480": Phase="Running", Reason="", readiness=true. Elapsed: 4.034228833s Jul 20 21:10:43.691: INFO: Pod "pod-configmaps-3d70c0a6-6ff6-44ad-95ec-fd584badd480": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041830578s STEP: Saw pod success Jul 20 21:10:43.691: INFO: Pod "pod-configmaps-3d70c0a6-6ff6-44ad-95ec-fd584badd480" satisfied condition "success or failure" Jul 20 21:10:43.696: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-3d70c0a6-6ff6-44ad-95ec-fd584badd480 container env-test: STEP: delete the pod Jul 20 21:10:43.744: INFO: Waiting for pod pod-configmaps-3d70c0a6-6ff6-44ad-95ec-fd584badd480 to disappear Jul 20 21:10:43.750: INFO: Pod pod-configmaps-3d70c0a6-6ff6-44ad-95ec-fd584badd480 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:10:43.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7256" for this suite. 
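The Secrets spec above consumes the secret through the container environment (env/valueFrom/secretKeyRef) rather than a volume. A minimal sketch with illustrative names:

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-test-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    # Exits 0 iff the secret value was injected into the environment.
    command: ["sh", "-c", "test \"$SECRET_DATA\" = value-1"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF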
• [SLOW TEST:6.192 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":565,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:10:43.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jul 20 21:10:43.837: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4362 /api/v1/namespaces/watch-4362/configmaps/e2e-watch-test-resource-version d6a0d99f-2936-4981-a67c-4e22f5457a69 2864621 0 2020-07-20 21:10:43 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 20 21:10:43.837: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4362 /api/v1/namespaces/watch-4362/configmaps/e2e-watch-test-resource-version d6a0d99f-2936-4981-a67c-4e22f5457a69 2864622 0 2020-07-20 21:10:43 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:10:43.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4362" for this suite. 
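The Watchers spec above starts its watch at the resourceVersion returned by the first update, so only the second modification and the deletion are delivered (the two events logged, RVs 2864621 and 2864622). The same semantics are visible against the raw API; the RV below is a placeholder for whatever the first update returned:

kubectl proxy --port=8001 &
RV=2864620   # placeholder: resourceVersion returned by the first update
curl "http://127.0.0.1:8001/api/v1/namespaces/watch-4362/configmaps?watch=true&resourceVersion=${RV}&fieldSelector=metadata.name%3De2e-watch-test-resource-version"
# Streams only events newer than ${RV}: one MODIFIED, then DELETED.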
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":46,"skipped":567,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:10:43.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:11:17.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1684" for this suite. 
• [SLOW TEST:33.720 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":573,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:11:17.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-lngn STEP: Creating a pod to test atomic-volume-subpath Jul 20 21:11:17.975: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lngn" in namespace "subpath-5408" to be "success or failure" Jul 20 21:11:17.997: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Pending", Reason="", readiness=false. Elapsed: 21.707636ms Jul 20 21:11:20.010: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03530106s Jul 20 21:11:22.017: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Running", Reason="", readiness=true. Elapsed: 4.041737349s Jul 20 21:11:24.021: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Running", Reason="", readiness=true. Elapsed: 6.045842677s Jul 20 21:11:26.025: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Running", Reason="", readiness=true. Elapsed: 8.050126947s Jul 20 21:11:28.029: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Running", Reason="", readiness=true. Elapsed: 10.054088857s Jul 20 21:11:30.032: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Running", Reason="", readiness=true. Elapsed: 12.057520453s Jul 20 21:11:32.036: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Running", Reason="", readiness=true. Elapsed: 14.061226579s Jul 20 21:11:34.040: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Running", Reason="", readiness=true. Elapsed: 16.065334307s Jul 20 21:11:36.044: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Running", Reason="", readiness=true. Elapsed: 18.068850068s Jul 20 21:11:38.067: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Running", Reason="", readiness=true. Elapsed: 20.091996997s Jul 20 21:11:40.071: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.096200797s Jul 20 21:11:42.075: INFO: Pod "pod-subpath-test-projected-lngn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.100101687s STEP: Saw pod success Jul 20 21:11:42.075: INFO: Pod "pod-subpath-test-projected-lngn" satisfied condition "success or failure" Jul 20 21:11:42.078: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-lngn container test-container-subpath-projected-lngn: STEP: delete the pod Jul 20 21:11:42.125: INFO: Waiting for pod pod-subpath-test-projected-lngn to disappear Jul 20 21:11:42.142: INFO: Pod pod-subpath-test-projected-lngn no longer exists STEP: Deleting pod pod-subpath-test-projected-lngn Jul 20 21:11:42.142: INFO: Deleting pod "pod-subpath-test-projected-lngn" in namespace "subpath-5408" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:11:42.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5408" for this suite. • [SLOW TEST:24.559 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":48,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:11:42.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:11:42.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bfacf37c-2628-46c3-afb8-b83f4bbce755" in namespace "projected-6630" to be "success or failure" Jul 20 21:11:42.291: INFO: Pod "downwardapi-volume-bfacf37c-2628-46c3-afb8-b83f4bbce755": Phase="Pending", Reason="", readiness=false. Elapsed: 16.691011ms Jul 20 21:11:44.295: INFO: Pod "downwardapi-volume-bfacf37c-2628-46c3-afb8-b83f4bbce755": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020772549s Jul 20 21:11:46.299: INFO: Pod "downwardapi-volume-bfacf37c-2628-46c3-afb8-b83f4bbce755": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024352063s STEP: Saw pod success Jul 20 21:11:46.299: INFO: Pod "downwardapi-volume-bfacf37c-2628-46c3-afb8-b83f4bbce755" satisfied condition "success or failure" Jul 20 21:11:46.301: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bfacf37c-2628-46c3-afb8-b83f4bbce755 container client-container: STEP: delete the pod Jul 20 21:11:46.321: INFO: Waiting for pod downwardapi-volume-bfacf37c-2628-46c3-afb8-b83f4bbce755 to disappear Jul 20 21:11:46.326: INFO: Pod downwardapi-volume-bfacf37c-2628-46c3-afb8-b83f4bbce755 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:11:46.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6630" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":619,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:11:46.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 20 21:11:46.490: INFO: Waiting up to 5m0s for pod "pod-ed510732-fe9d-4bfb-a39a-17054794dfce" in namespace "emptydir-3981" to be "success or failure" Jul 20 21:11:46.493: INFO: Pod "pod-ed510732-fe9d-4bfb-a39a-17054794dfce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.422845ms Jul 20 21:11:48.504: INFO: Pod "pod-ed510732-fe9d-4bfb-a39a-17054794dfce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014132789s Jul 20 21:11:50.508: INFO: Pod "pod-ed510732-fe9d-4bfb-a39a-17054794dfce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01804444s STEP: Saw pod success Jul 20 21:11:50.508: INFO: Pod "pod-ed510732-fe9d-4bfb-a39a-17054794dfce" satisfied condition "success or failure" Jul 20 21:11:50.511: INFO: Trying to get logs from node jerma-worker pod pod-ed510732-fe9d-4bfb-a39a-17054794dfce container test-container: STEP: delete the pod Jul 20 21:11:50.525: INFO: Waiting for pod pod-ed510732-fe9d-4bfb-a39a-17054794dfce to disappear Jul 20 21:11:50.606: INFO: Pod pod-ed510732-fe9d-4bfb-a39a-17054794dfce no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:11:50.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3981" for this suite. 
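The (root,0666,tmpfs) spec above boils down to a pod with a memory-backed emptyDir volume whose test container creates a world-writable file and exits. A minimal Go sketch of such a pod using the k8s.io/api types; the image and args are illustrative stand-ins, not values taken from the suite:

```go
// Hypothetical reconstruction of the emptydir (root,0666,tmpfs) pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // the test waits for "success or failure"
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // illustrative image
				// Illustrative flag: the real test container creates a file with
				// mode 0666 in the mount and reads its permissions back.
				Args:         []string{"--new_file_0666=/test-volume/test-file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // feed to kubectl apply -f to reproduce the scenario
}
```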
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:11:50.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-6791 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6791 to expose endpoints map[] Jul 20 21:11:50.760: INFO: Get endpoints failed (9.917568ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jul 20 21:11:51.764: INFO: successfully validated that service endpoint-test2 in namespace services-6791 exposes endpoints map[] (1.013874973s elapsed) STEP: Creating pod pod1 in namespace services-6791 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6791 to expose endpoints map[pod1:[80]] Jul 20 21:11:54.845: INFO: successfully validated that service endpoint-test2 in namespace services-6791 exposes endpoints map[pod1:[80]] (3.050899466s elapsed) STEP: Creating pod pod2 in namespace services-6791 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6791 to expose endpoints map[pod1:[80] pod2:[80]] Jul 20 21:11:58.911: INFO: successfully validated that service endpoint-test2 in namespace services-6791 exposes endpoints map[pod1:[80] pod2:[80]] (4.061695871s elapsed) STEP: Deleting pod pod1 in namespace services-6791 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6791 to expose endpoints map[pod2:[80]] Jul 20 21:11:58.957: INFO: successfully validated that service endpoint-test2 in namespace services-6791 exposes endpoints map[pod2:[80]] (30.035081ms elapsed) STEP: Deleting pod pod2 in namespace services-6791 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6791 to expose endpoints map[] Jul 20 21:11:59.978: INFO: successfully validated that service endpoint-test2 in namespace services-6791 exposes endpoints map[] (1.01581284s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:12:00.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6791" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.499 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":51,"skipped":653,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:12:00.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-4116b1d3-21d1-4b4a-83a8-a2327e0151f7 STEP: Creating a pod to test consume configMaps Jul 20 21:12:00.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-6397c1a0-3e14-45c1-8ce8-5d86ff908352" in namespace "configmap-1197" to be "success or failure" Jul 20 21:12:00.329: INFO: Pod "pod-configmaps-6397c1a0-3e14-45c1-8ce8-5d86ff908352": Phase="Pending", Reason="", readiness=false. Elapsed: 9.814636ms Jul 20 21:12:02.355: INFO: Pod "pod-configmaps-6397c1a0-3e14-45c1-8ce8-5d86ff908352": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035834693s Jul 20 21:12:04.358: INFO: Pod "pod-configmaps-6397c1a0-3e14-45c1-8ce8-5d86ff908352": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039526988s STEP: Saw pod success Jul 20 21:12:04.358: INFO: Pod "pod-configmaps-6397c1a0-3e14-45c1-8ce8-5d86ff908352" satisfied condition "success or failure" Jul 20 21:12:04.361: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6397c1a0-3e14-45c1-8ce8-5d86ff908352 container configmap-volume-test: STEP: delete the pod Jul 20 21:12:04.382: INFO: Waiting for pod pod-configmaps-6397c1a0-3e14-45c1-8ce8-5d86ff908352 to disappear Jul 20 21:12:04.386: INFO: Pod pod-configmaps-6397c1a0-3e14-45c1-8ce8-5d86ff908352 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:12:04.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1197" for this suite. 
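The "volume with mappings" variant differs from the plain ConfigMap-volume case in that each key is projected to an explicit relative path via items. A rough reconstruction (key, value, paths, and image are assumptions):

```go
// Sketch: ConfigMap key "data-1" is remapped to path/to/data-2 inside the mount.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-mappings"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						// The mapping under test: key -> nested relative path.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox:1.29", // illustrative
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```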
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:12:04.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 20 21:12:04.477: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 20 21:12:04.512: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 20 21:12:09.519: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 20 21:12:09.519: INFO: Creating deployment "test-rolling-update-deployment" Jul 20 21:12:09.531: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 20 21:12:09.543: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jul 20 21:12:11.550: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jul 20 21:12:11.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876329, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876329, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876329, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876329, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 21:12:13.722: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jul 20 21:12:13.802: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9102 /apis/apps/v1/namespaces/deployment-9102/deployments/test-rolling-update-deployment 64826f08-2ffa-46e3-ae9e-f736d527bc76 2865168 1 2020-07-20 21:12:09 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001cde208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-20 21:12:09 +0000 UTC,LastTransitionTime:2020-07-20 21:12:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-07-20 21:12:13 +0000 UTC,LastTransitionTime:2020-07-20 21:12:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 20 21:12:13.804: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-9102 /apis/apps/v1/namespaces/deployment-9102/replicasets/test-rolling-update-deployment-67cf4f6444 1d271eed-0b49-42f7-af2b-b5e8aedc2f02 2865158 1 2020-07-20 21:12:09 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 64826f08-2ffa-46e3-ae9e-f736d527bc76 0xc001cdee27 0xc001cdee28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001cdeee8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 20 21:12:13.804: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 20 21:12:13.804: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9102 /apis/apps/v1/namespaces/deployment-9102/replicasets/test-rolling-update-controller 38e04630-5f49-4ba7-aa71-0e9844ecade4 2865167 2 2020-07-20 21:12:04 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 64826f08-2ffa-46e3-ae9e-f736d527bc76 0xc001cdec77 0xc001cdec78}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001cded78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 21:12:13.808: INFO: Pod "test-rolling-update-deployment-67cf4f6444-7dlfs" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-7dlfs test-rolling-update-deployment-67cf4f6444- deployment-9102 /api/v1/namespaces/deployment-9102/pods/test-rolling-update-deployment-67cf4f6444-7dlfs a0c27e75-fd46-4574-afe9-07239409de5b 2865157 0 2020-07-20 21:12:09 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 1d271eed-0b49-42f7-af2b-b5e8aedc2f02 0xc00287c117 0xc00287c118}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rt6z7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rt6z7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rt6z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:12:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:12:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:12:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:12:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.65,StartTime:2020-07-20 21:12:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 21:12:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://159bfb70a92952a32c0342162ae7c897e86d5da9e0e68053652239cb4376c301,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:12:13.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9102" for this suite. • [SLOW TEST:9.422 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":53,"skipped":689,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:12:13.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-89d88514-904c-4f44-94bb-7bb05e337ebd STEP: Creating a pod to test consume secrets Jul 20 21:12:13.953: INFO: Waiting up to 5m0s for pod "pod-secrets-d37c22f7-d7b3-408e-8b25-a103d3042965" in namespace "secrets-3179" to be "success or failure" Jul 20 21:12:14.110: INFO: Pod "pod-secrets-d37c22f7-d7b3-408e-8b25-a103d3042965": Phase="Pending", Reason="", readiness=false. Elapsed: 157.350776ms Jul 20 21:12:16.199: INFO: Pod "pod-secrets-d37c22f7-d7b3-408e-8b25-a103d3042965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246275577s Jul 20 21:12:18.217: INFO: Pod "pod-secrets-d37c22f7-d7b3-408e-8b25-a103d3042965": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.264385425s STEP: Saw pod success Jul 20 21:12:18.217: INFO: Pod "pod-secrets-d37c22f7-d7b3-408e-8b25-a103d3042965" satisfied condition "success or failure" Jul 20 21:12:18.220: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-d37c22f7-d7b3-408e-8b25-a103d3042965 container secret-volume-test: STEP: delete the pod Jul 20 21:12:18.240: INFO: Waiting for pod pod-secrets-d37c22f7-d7b3-408e-8b25-a103d3042965 to disappear Jul 20 21:12:18.275: INFO: Pod pod-secrets-d37c22f7-d7b3-408e-8b25-a103d3042965 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:12:18.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3179" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":706,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:12:18.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jul 20 21:12:24.911: INFO: Successfully updated pod "adopt-release-vdb4r" STEP: Checking that the Job readopts the Pod Jul 20 21:12:24.912: INFO: Waiting up to 15m0s for pod "adopt-release-vdb4r" in namespace "job-4497" to be "adopted" Jul 20 21:12:24.920: INFO: Pod "adopt-release-vdb4r": Phase="Running", Reason="", readiness=true. Elapsed: 8.175549ms Jul 20 21:12:26.942: INFO: Pod "adopt-release-vdb4r": Phase="Running", Reason="", readiness=true. Elapsed: 2.030264542s Jul 20 21:12:26.942: INFO: Pod "adopt-release-vdb4r" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jul 20 21:12:27.449: INFO: Successfully updated pod "adopt-release-vdb4r" STEP: Checking that the Job releases the Pod Jul 20 21:12:27.449: INFO: Waiting up to 15m0s for pod "adopt-release-vdb4r" in namespace "job-4497" to be "released" Jul 20 21:12:27.459: INFO: Pod "adopt-release-vdb4r": Phase="Running", Reason="", readiness=true. Elapsed: 9.542735ms Jul 20 21:12:29.541: INFO: Pod "adopt-release-vdb4r": Phase="Running", Reason="", readiness=true. Elapsed: 2.091297356s Jul 20 21:12:29.541: INFO: Pod "adopt-release-vdb4r" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:12:29.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4497" for this suite. 
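The adopt/release flow above is driven purely by labels and ownerReferences: the Job controller claims a running pod whose labels match its selector, and releases it once the labels are removed. A sketch of a Job shaped like the one under test (names, labels, and image are assumptions):

```go
// Sketch of the adopt/release mechanics exercised above.
package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32p(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"job": "adopt-release"} // assumed
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "adopt-release"},
		Spec: batchv1.JobSpec{
			Parallelism: int32p(2), // matches "Ensuring active pods == parallelism" in the log
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "c",
						Image:   "busybox:1.29", // illustrative long-running container
						Command: []string{"sleep", "1000000"},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}
```

In the log, "orphaning" a pod means stripping its ownerReferences (the controller re-adds them because the labels still match), while removing the labels makes the controller release the pod for good.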
• [SLOW TEST:11.264 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":55,"skipped":730,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:12:29.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 20 21:12:29.786: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jul 20 21:12:29.938: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:29.940: INFO: Number of nodes with available pods: 0 Jul 20 21:12:29.940: INFO: Node jerma-worker is running more than one daemon pod Jul 20 21:12:30.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:30.949: INFO: Number of nodes with available pods: 0 Jul 20 21:12:30.949: INFO: Node jerma-worker is running more than one daemon pod Jul 20 21:12:31.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:31.949: INFO: Number of nodes with available pods: 0 Jul 20 21:12:31.949: INFO: Node jerma-worker is running more than one daemon pod Jul 20 21:12:32.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:32.949: INFO: Number of nodes with available pods: 0 Jul 20 21:12:32.949: INFO: Node jerma-worker is running more than one daemon pod Jul 20 21:12:33.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:33.950: INFO: Number of nodes with available pods: 1 Jul 20 21:12:33.950: INFO: Node jerma-worker is running more than one daemon pod Jul 20 21:12:34.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:34.953: INFO: Number of nodes with available pods: 2 
Jul 20 21:12:34.953: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Jul 20 21:12:35.019: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:35.019: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:35.074: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:36.078: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:36.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:36.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:37.078: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:37.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:37.083: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:38.078: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:38.078: INFO: Pod daemon-set-4ntcv is not available Jul 20 21:12:38.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:38.083: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:39.078: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:39.078: INFO: Pod daemon-set-4ntcv is not available Jul 20 21:12:39.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:39.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:40.078: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:40.078: INFO: Pod daemon-set-4ntcv is not available Jul 20 21:12:40.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 20 21:12:40.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:41.086: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:41.086: INFO: Pod daemon-set-4ntcv is not available Jul 20 21:12:41.086: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:41.090: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:42.079: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:42.079: INFO: Pod daemon-set-4ntcv is not available Jul 20 21:12:42.079: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:42.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:43.078: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:43.078: INFO: Pod daemon-set-4ntcv is not available Jul 20 21:12:43.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:43.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:44.078: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:44.078: INFO: Pod daemon-set-4ntcv is not available Jul 20 21:12:44.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:44.083: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:45.078: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:45.078: INFO: Pod daemon-set-4ntcv is not available Jul 20 21:12:45.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:45.081: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:46.078: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:46.078: INFO: Pod daemon-set-4ntcv is not available Jul 20 21:12:46.078: INFO: Wrong image for pod: daemon-set-llv5l. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:46.083: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:47.077: INFO: Wrong image for pod: daemon-set-4ntcv. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:47.077: INFO: Pod daemon-set-4ntcv is not available Jul 20 21:12:47.077: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:47.080: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:48.078: INFO: Pod daemon-set-24bcv is not available Jul 20 21:12:48.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:48.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:49.089: INFO: Pod daemon-set-24bcv is not available Jul 20 21:12:49.089: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:49.315: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:50.078: INFO: Pod daemon-set-24bcv is not available Jul 20 21:12:50.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:50.083: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:51.079: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:51.083: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:52.083: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:52.083: INFO: Pod daemon-set-llv5l is not available Jul 20 21:12:52.086: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:53.077: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:53.077: INFO: Pod daemon-set-llv5l is not available Jul 20 21:12:53.081: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:54.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jul 20 21:12:54.078: INFO: Pod daemon-set-llv5l is not available Jul 20 21:12:54.083: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:55.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:55.078: INFO: Pod daemon-set-llv5l is not available Jul 20 21:12:55.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:56.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:56.078: INFO: Pod daemon-set-llv5l is not available Jul 20 21:12:56.083: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:57.078: INFO: Wrong image for pod: daemon-set-llv5l. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jul 20 21:12:57.078: INFO: Pod daemon-set-llv5l is not available Jul 20 21:12:57.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:58.078: INFO: Pod daemon-set-69r6r is not available Jul 20 21:12:58.081: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Jul 20 21:12:58.085: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:58.088: INFO: Number of nodes with available pods: 1 Jul 20 21:12:58.088: INFO: Node jerma-worker is running more than one daemon pod Jul 20 21:12:59.291: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:12:59.295: INFO: Number of nodes with available pods: 1 Jul 20 21:12:59.295: INFO: Node jerma-worker is running more than one daemon pod Jul 20 21:13:00.093: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:13:00.096: INFO: Number of nodes with available pods: 1 Jul 20 21:13:00.096: INFO: Node jerma-worker is running more than one daemon pod Jul 20 21:13:01.104: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 20 21:13:01.107: INFO: Number of nodes with available pods: 2 Jul 20 21:13:01.107: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1536, will wait for the garbage collector to delete the pods Jul 20 21:13:01.186: INFO: Deleting DaemonSet.extensions daemon-set took: 12.043512ms Jul 20 21:13:01.486: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.288768ms Jul 20 21:13:07.590: INFO: Number of nodes with available pods: 0 Jul 20 21:13:07.590: INFO: Number of running nodes: 0, number of available pods: 0 Jul 20 21:13:07.593: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1536/daemonsets","resourceVersion":"2865519"},"items":null} Jul 20 21:13:07.596: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1536/pods","resourceVersion":"2865519"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:13:07.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1536" for this suite. 
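The update traced above is the standard DaemonSet RollingUpdate: patching the pod template image (httpd to agnhost in this run) replaces pods one node at a time, bounded by maxUnavailable. An approximate reconstruction of such a DaemonSet; the labels are assumed, and maxUnavailable of 1 is the API default rather than a value read from the log:

```go
// Sketch of a RollingUpdate DaemonSet like the daemon-set above.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed
	maxUnavailable := intstr.FromInt(1)
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type:          appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{MaxUnavailable: &maxUnavailable},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: "app",
						// Updating this to gcr.io/kubernetes-e2e-test-images/agnhost:2.8
						// triggers the per-node replacement traced in the log.
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```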
• [SLOW TEST:38.066 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":56,"skipped":732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:13:07.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 20 21:13:31.757: INFO: Container started at 2020-07-20 21:13:10 +0000 UTC, pod became ready at 2020-07-20 21:13:31 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:13:31.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8947" for this suite. • [SLOW TEST:24.154 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":768,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:13:31.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:13:48.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9999" for this suite. • [SLOW TEST:16.253 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":58,"skipped":769,"failed":0} [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:13:48.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 20 21:13:48.132: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:13:55.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5269" for this suite. 
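For the RestartNever case above, the contract is that init containers run to completion, in order, before the app container starts, and with restartPolicy Never a failing init marks the whole pod Failed with no retries. A minimal sketch (images and commands are illustrative, not from the suite):

```go
// Sketch of a RestartNever pod with two init containers.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Both inits must exit 0, sequentially, before run1 starts.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```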
• [SLOW TEST:7.220 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":59,"skipped":769,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:13:55.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jul 20 21:13:55.284: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Jul 20 21:13:55.709: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jul 20 21:13:57.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876435, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876435, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876435, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876435, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 21:13:59.914: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876435, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876435, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876435, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876435, loc:(*time.Location)(0x78f7140)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 20 21:14:02.541: INFO: Waited 621.037583ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:14:02.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5251" for this suite. • [SLOW TEST:7.835 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":60,"skipped":777,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:14:03.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 20 21:14:03.707: INFO: Waiting up to 5m0s for pod "downward-api-9ad6581d-0186-4661-9f85-1aa9b0d7af23" in namespace "downward-api-1651" to be "success or failure" Jul 20 21:14:03.831: INFO: Pod "downward-api-9ad6581d-0186-4661-9f85-1aa9b0d7af23": Phase="Pending", Reason="", readiness=false. Elapsed: 124.088357ms Jul 20 21:14:05.834: INFO: Pod "downward-api-9ad6581d-0186-4661-9f85-1aa9b0d7af23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12729839s Jul 20 21:14:07.842: INFO: Pod "downward-api-9ad6581d-0186-4661-9f85-1aa9b0d7af23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135443959s STEP: Saw pod success Jul 20 21:14:07.843: INFO: Pod "downward-api-9ad6581d-0186-4661-9f85-1aa9b0d7af23" satisfied condition "success or failure" Jul 20 21:14:07.845: INFO: Trying to get logs from node jerma-worker2 pod downward-api-9ad6581d-0186-4661-9f85-1aa9b0d7af23 container dapi-container: STEP: delete the pod Jul 20 21:14:07.885: INFO: Waiting for pod downward-api-9ad6581d-0186-4661-9f85-1aa9b0d7af23 to disappear Jul 20 21:14:07.913: INFO: Pod downward-api-9ad6581d-0186-4661-9f85-1aa9b0d7af23 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:14:07.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1651" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":793,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:14:07.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:14:08.018: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e49e56fa-1a92-41c8-b8d8-9a9da9336806" in namespace "downward-api-8902" to be "success or failure" Jul 20 21:14:08.028: INFO: Pod "downwardapi-volume-e49e56fa-1a92-41c8-b8d8-9a9da9336806": Phase="Pending", Reason="", readiness=false. Elapsed: 9.718093ms Jul 20 21:14:10.172: INFO: Pod "downwardapi-volume-e49e56fa-1a92-41c8-b8d8-9a9da9336806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153676353s Jul 20 21:14:12.175: INFO: Pod "downwardapi-volume-e49e56fa-1a92-41c8-b8d8-9a9da9336806": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157191075s STEP: Saw pod success Jul 20 21:14:12.175: INFO: Pod "downwardapi-volume-e49e56fa-1a92-41c8-b8d8-9a9da9336806" satisfied condition "success or failure" Jul 20 21:14:12.178: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e49e56fa-1a92-41c8-b8d8-9a9da9336806 container client-container: STEP: delete the pod Jul 20 21:14:12.251: INFO: Waiting for pod downwardapi-volume-e49e56fa-1a92-41c8-b8d8-9a9da9336806 to disappear Jul 20 21:14:12.261: INFO: Pod downwardapi-volume-e49e56fa-1a92-41c8-b8d8-9a9da9336806 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:14:12.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8902" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":813,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:14:12.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 20 21:14:12.390: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:14:12.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5140" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":63,"skipped":825,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:14:12.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-7a6a4aac-62ac-4a33-a8ac-926a9eda596c [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:14:13.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8829" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":64,"skipped":832,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:14:13.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-ed306acd-af35-4ea8-8332-45e6a1c3816a STEP: Creating a pod to test consume secrets Jul 20 21:14:13.140: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e0f78478-c7ff-45b2-a7c7-c7c46d1712bf" in namespace "projected-617" to be "success or failure" Jul 20 21:14:13.201: INFO: Pod "pod-projected-secrets-e0f78478-c7ff-45b2-a7c7-c7c46d1712bf": Phase="Pending", Reason="", readiness=false. Elapsed: 61.277814ms Jul 20 21:14:15.205: INFO: Pod "pod-projected-secrets-e0f78478-c7ff-45b2-a7c7-c7c46d1712bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065029333s Jul 20 21:14:17.209: INFO: Pod "pod-projected-secrets-e0f78478-c7ff-45b2-a7c7-c7c46d1712bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068998584s STEP: Saw pod success Jul 20 21:14:17.209: INFO: Pod "pod-projected-secrets-e0f78478-c7ff-45b2-a7c7-c7c46d1712bf" satisfied condition "success or failure" Jul 20 21:14:17.212: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-e0f78478-c7ff-45b2-a7c7-c7c46d1712bf container projected-secret-volume-test: STEP: delete the pod Jul 20 21:14:17.242: INFO: Waiting for pod pod-projected-secrets-e0f78478-c7ff-45b2-a7c7-c7c46d1712bf to disappear Jul 20 21:14:17.276: INFO: Pod pod-projected-secrets-e0f78478-c7ff-45b2-a7c7-c7c46d1712bf no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:14:17.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-617" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":839,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:14:17.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:14:22.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3458" for this suite. • [SLOW TEST:5.356 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":66,"skipped":844,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:14:22.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:14:38.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9295" for this suite. 
• [SLOW TEST:16.187 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":67,"skipped":850,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:14:38.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4829.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4829.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 21:14:45.031: INFO: DNS probes using dns-test-9a0c281f-ce03-48cd-9efe-f0256a9ce59e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4829.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4829.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 21:14:51.184: INFO: File wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local from pod dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 21:14:51.187: INFO: File jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local from pod dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 21:14:51.187: INFO: Lookups using dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a failed for: [wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local] Jul 20 21:14:56.193: INFO: File wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local from pod dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jul 20 21:14:56.196: INFO: File jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local from pod dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 21:14:56.196: INFO: Lookups using dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a failed for: [wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local] Jul 20 21:15:01.192: INFO: File wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local from pod dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 21:15:01.195: INFO: File jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local from pod dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 21:15:01.195: INFO: Lookups using dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a failed for: [wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local] Jul 20 21:15:06.192: INFO: File wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local from pod dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 21:15:06.195: INFO: File jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local from pod dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 21:15:06.195: INFO: Lookups using dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a failed for: [wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local] Jul 20 21:15:11.192: INFO: File wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local from pod dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a contains 'foo.example.com. ' instead of 'bar.example.com.' Jul 20 21:15:11.195: INFO: File jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local from pod dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jul 20 21:15:11.195: INFO: Lookups using dns-4829/dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a failed for: [wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local] Jul 20 21:15:16.195: INFO: DNS probes using dns-test-f6671c08-d80c-4920-b10d-934dcab3eb2a succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4829.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4829.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4829.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4829.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 21:15:24.909: INFO: DNS probes using dns-test-d4385dc9-621c-4a6f-a0cd-3c04190330f4 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:15:25.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4829" for this suite. • [SLOW TEST:46.219 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":68,"skipped":851,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:15:25.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 20 21:15:33.572: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 21:15:33.587: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 21:15:35.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 21:15:35.591: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 21:15:37.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 21:15:37.592: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 21:15:39.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 21:15:39.591: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 21:15:41.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 21:15:41.591: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 21:15:43.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 21:15:43.590: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 21:15:45.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 21:15:45.592: INFO: Pod pod-with-prestop-http-hook still exists Jul 20 21:15:47.587: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 20 21:15:47.592: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:15:47.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1703" for this suite. • [SLOW TEST:22.571 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":874,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:15:47.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0720 21:15:57.753469 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 20 21:15:57.753: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:15:57.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7902" for this suite. • [SLOW TEST:10.143 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":70,"skipped":880,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:15:57.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-bff33989-5761-42f8-b095-aa17c6369b69 STEP: Creating a pod to test consume configMaps Jul 20 21:15:57.889: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-99ee2fea-abb2-40c6-8812-9e4323f87081" in namespace "projected-6158" to be "success or failure" Jul 20 21:15:57.902: INFO: Pod "pod-projected-configmaps-99ee2fea-abb2-40c6-8812-9e4323f87081": Phase="Pending", Reason="", readiness=false. Elapsed: 12.759268ms Jul 20 21:15:59.907: INFO: Pod "pod-projected-configmaps-99ee2fea-abb2-40c6-8812-9e4323f87081": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017888654s Jul 20 21:16:01.993: INFO: Pod "pod-projected-configmaps-99ee2fea-abb2-40c6-8812-9e4323f87081": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.104137783s STEP: Saw pod success Jul 20 21:16:01.993: INFO: Pod "pod-projected-configmaps-99ee2fea-abb2-40c6-8812-9e4323f87081" satisfied condition "success or failure" Jul 20 21:16:01.996: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-99ee2fea-abb2-40c6-8812-9e4323f87081 container projected-configmap-volume-test: STEP: delete the pod Jul 20 21:16:02.033: INFO: Waiting for pod pod-projected-configmaps-99ee2fea-abb2-40c6-8812-9e4323f87081 to disappear Jul 20 21:16:02.066: INFO: Pod pod-projected-configmaps-99ee2fea-abb2-40c6-8812-9e4323f87081 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:16:02.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6158" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":895,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:16:02.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 20 21:16:05.500: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:16:05.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4965" for this suite. 
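The termination-message case above sets TerminationMessagePolicy to FallbackToLogsOnError: because the container exits successfully and writes to the message file, the kubelet reports the file contents (the OK seen in the log) rather than falling back to the log tail. A minimal sketch of such a container; the name, image, and command are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Container whose termination message comes from the message file when
	// the file is non-empty, with the log tail used only on error.
	c := corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox",
		Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```

After the pod terminates, the message surfaces in status.containerStatuses[].state.terminated.message, which is the field the "the termination message should be set" step inspects.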
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":904,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:16:05.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:16:05.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-227" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":73,"skipped":918,"failed":0} ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:16:05.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8921 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8921;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8921 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8921;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8921.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8921.svc;check="$$(dig +tcp 
+noall +answer +search dns-test-service.dns-8921.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8921.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8921.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8921.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8921.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8921.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8921.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8921.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8921.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 245.193.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.193.245_udp@PTR;check="$$(dig +tcp +noall +answer +search 245.193.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.193.245_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8921 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8921;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8921 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8921;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8921.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8921.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8921.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8921.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8921.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8921.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8921.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8921.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8921.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8921.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8921.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8921.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 245.193.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.193.245_udp@PTR;check="$$(dig +tcp +noall +answer +search 245.193.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.193.245_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 20 21:16:14.065: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.068: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.071: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.074: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.077: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.079: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.082: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.085: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.102: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.105: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.107: INFO: Unable to read jessie_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.110: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.113: INFO: Unable to read jessie_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.116: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.119: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.121: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:14.137: INFO: Lookups using dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8921 wheezy_tcp@dns-test-service.dns-8921 wheezy_udp@dns-test-service.dns-8921.svc wheezy_tcp@dns-test-service.dns-8921.svc wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8921 jessie_tcp@dns-test-service.dns-8921 jessie_udp@dns-test-service.dns-8921.svc jessie_tcp@dns-test-service.dns-8921.svc jessie_udp@_http._tcp.dns-test-service.dns-8921.svc jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc] Jul 20 21:16:19.142: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.146: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.149: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.151: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.154: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.156: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.159: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.161: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.189: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.191: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.195: INFO: Unable to read jessie_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.198: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.200: INFO: Unable to read jessie_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.203: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.262: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.270: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:19.287: INFO: Lookups using dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8921 wheezy_tcp@dns-test-service.dns-8921 wheezy_udp@dns-test-service.dns-8921.svc wheezy_tcp@dns-test-service.dns-8921.svc wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8921 jessie_tcp@dns-test-service.dns-8921 jessie_udp@dns-test-service.dns-8921.svc jessie_tcp@dns-test-service.dns-8921.svc jessie_udp@_http._tcp.dns-test-service.dns-8921.svc jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc] Jul 20 21:16:24.142: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.145: INFO: Unable to read 
wheezy_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.148: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.151: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.154: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.158: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.161: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.164: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.184: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.187: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.189: INFO: Unable to read jessie_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.192: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.194: INFO: Unable to read jessie_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.196: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.199: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.201: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:24.217: INFO: Lookups using dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8921 wheezy_tcp@dns-test-service.dns-8921 wheezy_udp@dns-test-service.dns-8921.svc wheezy_tcp@dns-test-service.dns-8921.svc wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8921 jessie_tcp@dns-test-service.dns-8921 jessie_udp@dns-test-service.dns-8921.svc jessie_tcp@dns-test-service.dns-8921.svc jessie_udp@_http._tcp.dns-test-service.dns-8921.svc jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc] Jul 20 21:16:29.141: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.144: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.147: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.150: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.153: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.156: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.158: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.161: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.177: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.179: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.181: INFO: Unable to read jessie_udp@dns-test-service.dns-8921 from pod 
dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.184: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.186: INFO: Unable to read jessie_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.188: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.190: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.192: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:29.206: INFO: Lookups using dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8921 wheezy_tcp@dns-test-service.dns-8921 wheezy_udp@dns-test-service.dns-8921.svc wheezy_tcp@dns-test-service.dns-8921.svc wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8921 jessie_tcp@dns-test-service.dns-8921 jessie_udp@dns-test-service.dns-8921.svc jessie_tcp@dns-test-service.dns-8921.svc jessie_udp@_http._tcp.dns-test-service.dns-8921.svc jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc] Jul 20 21:16:34.142: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.146: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.149: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.153: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.155: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.158: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921.svc from pod 
dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.160: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.162: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.179: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.181: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.183: INFO: Unable to read jessie_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.186: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.188: INFO: Unable to read jessie_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.191: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.193: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.196: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:34.212: INFO: Lookups using dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8921 wheezy_tcp@dns-test-service.dns-8921 wheezy_udp@dns-test-service.dns-8921.svc wheezy_tcp@dns-test-service.dns-8921.svc wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8921 jessie_tcp@dns-test-service.dns-8921 jessie_udp@dns-test-service.dns-8921.svc jessie_tcp@dns-test-service.dns-8921.svc jessie_udp@_http._tcp.dns-test-service.dns-8921.svc jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc] Jul 20 21:16:39.142: INFO: Unable to read wheezy_udp@dns-test-service from pod 
dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.146: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.151: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.154: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.157: INFO: Unable to read wheezy_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.159: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.161: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.163: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.188: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.191: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.194: INFO: Unable to read jessie_udp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.197: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921 from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.199: INFO: Unable to read jessie_udp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.202: INFO: Unable to read jessie_tcp@dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.205: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8921.svc from pod 
dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.208: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc from pod dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce: the server could not find the requested resource (get pods dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce) Jul 20 21:16:39.225: INFO: Lookups using dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8921 wheezy_tcp@dns-test-service.dns-8921 wheezy_udp@dns-test-service.dns-8921.svc wheezy_tcp@dns-test-service.dns-8921.svc wheezy_udp@_http._tcp.dns-test-service.dns-8921.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8921.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8921 jessie_tcp@dns-test-service.dns-8921 jessie_udp@dns-test-service.dns-8921.svc jessie_tcp@dns-test-service.dns-8921.svc jessie_udp@_http._tcp.dns-test-service.dns-8921.svc jessie_tcp@_http._tcp.dns-test-service.dns-8921.svc] Jul 20 21:16:44.235: INFO: DNS probes using dns-8921/dns-test-692e1b7e-b79c-4e2d-ba34-e3bd65b5dfce succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:16:44.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8921" for this suite. • [SLOW TEST:39.227 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":74,"skipped":918,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:16:44.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 20 21:16:45.035: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 20 21:16:46.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7768 create -f -' Jul 20 21:16:50.681: INFO: stderr: "" Jul 20 21:16:50.681: INFO: stdout: "e2e-test-crd-publish-openapi-9461-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 20 21:16:50.681: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7768 delete e2e-test-crd-publish-openapi-9461-crds test-cr' Jul 20 21:16:50.795: INFO: stderr: "" Jul 20 21:16:50.795: INFO: stdout: "e2e-test-crd-publish-openapi-9461-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jul 20 21:16:50.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7768 apply -f -' Jul 20 21:16:51.079: INFO: stderr: "" Jul 20 21:16:51.079: INFO: stdout: "e2e-test-crd-publish-openapi-9461-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jul 20 21:16:51.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7768 delete e2e-test-crd-publish-openapi-9461-crds test-cr' Jul 20 21:16:51.182: INFO: stderr: "" Jul 20 21:16:51.182: INFO: stdout: "e2e-test-crd-publish-openapi-9461-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jul 20 21:16:51.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9461-crds' Jul 20 21:16:51.436: INFO: stderr: "" Jul 20 21:16:51.436: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9461-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:16:54.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7768" for this suite. • [SLOW TEST:9.463 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":75,"skipped":927,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:16:54.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 20 21:16:54.387: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 20 21:16:59.398: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 20 21:16:59.398: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Jul 20 21:16:59.421: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2564 /apis/apps/v1/namespaces/deployment-2564/deployments/test-cleanup-deployment 6983736c-7270-4004-917c-0fc54e908188 2867044 1 2020-07-20 21:16:59 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00346e8e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jul 20 21:16:59.476: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-2564 /apis/apps/v1/namespaces/deployment-2564/replicasets/test-cleanup-deployment-55ffc6b7b6 0fa5b337-627c-48c0-a859-3ab328211254 2867046 1 2020-07-20 21:16:59 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 6983736c-7270-4004-917c-0fc54e908188 0xc00346f277 0xc00346f278}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00346f2f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 20 21:16:59.476: INFO: All old ReplicaSets of Deployment
"test-cleanup-deployment": Jul 20 21:16:59.477: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2564 /apis/apps/v1/namespaces/deployment-2564/replicasets/test-cleanup-controller 700b935f-b3f9-474c-a8ec-9f4d54969540 2867045 1 2020-07-20 21:16:54 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 6983736c-7270-4004-917c-0fc54e908188 0xc00346f16f 0xc00346f180}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00346f208 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 20 21:16:59.538: INFO: Pod "test-cleanup-controller-l978p" is available: &Pod{ObjectMeta:{test-cleanup-controller-l978p test-cleanup-controller- deployment-2564 /api/v1/namespaces/deployment-2564/pods/test-cleanup-controller-l978p e2bc818b-372c-4d12-81fd-32a5afc1c836 2867033 0 2020-07-20 21:16:54 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 700b935f-b3f9-474c-a8ec-9f4d54969540 0xc00346f8e7 0xc00346f8e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p5dzc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p5dzc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p5dzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:16:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:16:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:16:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:16:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.80,StartTime:2020-07-20 21:16:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 21:16:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://680ea4f0484363ed18471a34127b2c8f80e8ee443147a30504f3d5de24e72c4b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 20 21:16:59.538: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-pxxrc" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-pxxrc test-cleanup-deployment-55ffc6b7b6- deployment-2564 /api/v1/namespaces/deployment-2564/pods/test-cleanup-deployment-55ffc6b7b6-pxxrc d71dcfdd-f3fb-4578-8fb2-f69ba670d505 2867052 0 2020-07-20 21:16:59 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 0fa5b337-627c-48c0-a859-3ab328211254 0xc00346fb17 0xc00346fb18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-p5dzc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-p5dzc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-p5dzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:16:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:16:59.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2564" for this suite. 
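Side note on the dump above: RevisionHistoryLimit:*0 on test-cleanup-deployment is what obliges the controller to delete superseded ReplicaSets immediately. A minimal sketch of building an equivalent object with k8s.io/api (names taken from the log; the program itself is illustrative, not part of the suite):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas, historyLimit := int32(1), int32(0) // historyLimit 0: keep no old ReplicaSets
	labels := map[string]string{"name": "cleanup-pod"}
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment", Namespace: "deployment-2564"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "agnhost",
					Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				}}},
			},
		},
	}
	fmt.Println(d.Name, "keeps", *d.Spec.RevisionHistoryLimit, "old ReplicaSets")
}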
• [SLOW TEST:5.273 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":76,"skipped":944,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:16:59.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 20 21:16:59.657: INFO: Waiting up to 5m0s for pod "downward-api-a15b74cb-4a99-4117-ab7c-471b48e9fa44" in namespace "downward-api-8913" to be "success or failure" Jul 20 21:16:59.727: INFO: Pod "downward-api-a15b74cb-4a99-4117-ab7c-471b48e9fa44": Phase="Pending", Reason="", readiness=false. Elapsed: 70.177744ms Jul 20 21:17:01.732: INFO: Pod "downward-api-a15b74cb-4a99-4117-ab7c-471b48e9fa44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074493124s Jul 20 21:17:03.803: INFO: Pod "downward-api-a15b74cb-4a99-4117-ab7c-471b48e9fa44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145806719s Jul 20 21:17:05.807: INFO: Pod "downward-api-a15b74cb-4a99-4117-ab7c-471b48e9fa44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.150025414s STEP: Saw pod success Jul 20 21:17:05.807: INFO: Pod "downward-api-a15b74cb-4a99-4117-ab7c-471b48e9fa44" satisfied condition "success or failure" Jul 20 21:17:05.810: INFO: Trying to get logs from node jerma-worker2 pod downward-api-a15b74cb-4a99-4117-ab7c-471b48e9fa44 container dapi-container: STEP: delete the pod Jul 20 21:17:05.846: INFO: Waiting for pod downward-api-a15b74cb-4a99-4117-ab7c-471b48e9fa44 to disappear Jul 20 21:17:05.850: INFO: Pod downward-api-a15b74cb-4a99-4117-ab7c-471b48e9fa44 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:17:05.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8913" for this suite. 
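The downward API env vars this spec checks are plain fieldRef selectors. A minimal sketch of the container wiring, assuming busybox as the image (the container name dapi-container matches the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// fieldEnv exposes a pod metadata/status field as an env var.
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Env: []corev1.EnvVar{
			fieldEnv("POD_NAME", "metadata.name"),
			fieldEnv("POD_NAMESPACE", "metadata.namespace"),
			fieldEnv("POD_IP", "status.podIP"),
		},
	}
	fmt.Println(len(c.Env), "downward API env vars configured")
}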
• [SLOW TEST:6.260 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":983,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:17:05.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0720 21:17:18.125489 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 20 21:17:18.125: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:17:18.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5440" for this suite. 
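The ownership shape under test: each of the "half" pods carries two owner references, so deleting simpletest-rc-to-be-deleted must leave those pods alive while simpletest-rc-to-stay remains a valid owner. A sketch with placeholder UIDs (the RC names are from the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-pod",
			// Two owners: the GC only deletes a dependent once no live
			// owner references remain.
			OwnerReferences: []metav1.OwnerReference{
				{APIVersion: "v1", Kind: "ReplicationController",
					Name: "simpletest-rc-to-be-deleted", UID: types.UID("uid-1")},
				{APIVersion: "v1", Kind: "ReplicationController",
					Name: "simpletest-rc-to-stay", UID: types.UID("uid-2")},
			},
		},
	}
	fmt.Println(pod.Name, "has", len(pod.OwnerReferences), "owners")
}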
• [SLOW TEST:12.288 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":78,"skipped":985,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:17:18.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:17:18.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8179" for this suite. 
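The discovery walk above is three GETs: /apis, /apis/apiextensions.k8s.io, and /apis/apiextensions.k8s.io/v1. A sketch of the same group check via client-go's discovery client (assumes a recent client-go; kubeconfig path as in the log):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups() // GET /apis
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			for _, v := range g.Versions {
				fmt.Println("found", v.GroupVersion) // e.g. apiextensions.k8s.io/v1
			}
		}
	}
}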
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":79,"skipped":993,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:17:18.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 20 21:17:18.255: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:17:24.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5841" for this suite. • [SLOW TEST:6.145 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":80,"skipped":1039,"failed":0} [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:17:24.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jul 20 21:17:30.588: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jul 20 
21:17:40.680: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:17:40.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2583" for this suite. • [SLOW TEST:16.329 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":81,"skipped":1039,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:17:40.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:17:44.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3808" for this suite. 
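Verifying "should print the output to logs" comes down to reading the container log back. A sketch using client-go (the Stream(ctx) signature of recent client-go is assumed; pod and namespace names here are placeholders, not from the log):

package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the busybox container's stdout, which the test greps for.
	req := cs.CoreV1().Pods("kubelet-test").GetLogs("busybox-scheduling", &corev1.PodLogOptions{})
	rc, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	out, _ := io.ReadAll(rc)
	fmt.Print(string(out))
}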
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1042,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:17:44.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:17:44.895: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e855f0fa-e7d8-4136-90d0-3fb276a7d94e" in namespace "projected-6744" to be "success or failure" Jul 20 21:17:44.907: INFO: Pod "downwardapi-volume-e855f0fa-e7d8-4136-90d0-3fb276a7d94e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.706206ms Jul 20 21:17:46.911: INFO: Pod "downwardapi-volume-e855f0fa-e7d8-4136-90d0-3fb276a7d94e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016244555s Jul 20 21:17:48.947: INFO: Pod "downwardapi-volume-e855f0fa-e7d8-4136-90d0-3fb276a7d94e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052055596s STEP: Saw pod success Jul 20 21:17:48.947: INFO: Pod "downwardapi-volume-e855f0fa-e7d8-4136-90d0-3fb276a7d94e" satisfied condition "success or failure" Jul 20 21:17:48.950: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e855f0fa-e7d8-4136-90d0-3fb276a7d94e container client-container: STEP: delete the pod Jul 20 21:17:48.975: INFO: Waiting for pod downwardapi-volume-e855f0fa-e7d8-4136-90d0-3fb276a7d94e to disappear Jul 20 21:17:48.980: INFO: Pod downwardapi-volume-e855f0fa-e7d8-4136-90d0-3fb276a7d94e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:17:48.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6744" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1042,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:17:48.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 20 21:17:49.046: INFO: Waiting up to 5m0s for pod "pod-1f22648f-863e-4263-a091-a05f4ad88632" in namespace "emptydir-8173" to be "success or failure" Jul 20 21:17:49.096: INFO: Pod "pod-1f22648f-863e-4263-a091-a05f4ad88632": Phase="Pending", Reason="", readiness=false. Elapsed: 50.438231ms Jul 20 21:17:51.342: INFO: Pod "pod-1f22648f-863e-4263-a091-a05f4ad88632": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296418131s Jul 20 21:17:53.347: INFO: Pod "pod-1f22648f-863e-4263-a091-a05f4ad88632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.30095938s STEP: Saw pod success Jul 20 21:17:53.347: INFO: Pod "pod-1f22648f-863e-4263-a091-a05f4ad88632" satisfied condition "success or failure" Jul 20 21:17:53.349: INFO: Trying to get logs from node jerma-worker pod pod-1f22648f-863e-4263-a091-a05f4ad88632 container test-container: STEP: delete the pod Jul 20 21:17:53.388: INFO: Waiting for pod pod-1f22648f-863e-4263-a091-a05f4ad88632 to disappear Jul 20 21:17:53.399: INFO: Pod pod-1f22648f-863e-4263-a091-a05f4ad88632 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:17:53.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8173" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1046,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:17:53.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:17:57.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4308" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":85,"skipped":1074,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:17:57.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-1d9d95dd-e43e-4f3f-9f0d-d69c31170052 STEP: Creating a pod to test consume configMaps Jul 20 21:17:57.872: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a80ce468-0115-44a1-bce9-7bf8d5bf8f42" in namespace "projected-324" to be "success or failure" Jul 20 21:17:57.959: INFO: Pod "pod-projected-configmaps-a80ce468-0115-44a1-bce9-7bf8d5bf8f42": Phase="Pending", Reason="", readiness=false. Elapsed: 86.941043ms Jul 20 21:17:59.963: INFO: Pod "pod-projected-configmaps-a80ce468-0115-44a1-bce9-7bf8d5bf8f42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091223426s Jul 20 21:18:01.967: INFO: Pod "pod-projected-configmaps-a80ce468-0115-44a1-bce9-7bf8d5bf8f42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.095230766s STEP: Saw pod success Jul 20 21:18:01.967: INFO: Pod "pod-projected-configmaps-a80ce468-0115-44a1-bce9-7bf8d5bf8f42" satisfied condition "success or failure" Jul 20 21:18:01.970: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-a80ce468-0115-44a1-bce9-7bf8d5bf8f42 container projected-configmap-volume-test: STEP: delete the pod Jul 20 21:18:01.990: INFO: Waiting for pod pod-projected-configmaps-a80ce468-0115-44a1-bce9-7bf8d5bf8f42 to disappear Jul 20 21:18:01.994: INFO: Pod pod-projected-configmaps-a80ce468-0115-44a1-bce9-7bf8d5bf8f42 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:01.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-324" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1079,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:02.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jul 20 21:18:06.638: INFO: Successfully updated pod "labelsupdate2e9f787c-8342-42cf-bf54-9a76d64dbd98" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:10.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7677" for this suite. 
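"should update labels on modification" works because the kubelet rewrites a downward API volume file when metadata.labels changes, with no container restart. A sketch of that volume (volume and file names are placeholders):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					// The mounted "labels" file is refreshed in place after
					// the pod's labels are updated.
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	fmt.Println("projects", vol.VolumeSource.DownwardAPI.Items[0].Path)
}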
• [SLOW TEST:8.685 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:10.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Jul 20 21:18:10.761: INFO: Waiting up to 5m0s for pod "downward-api-ba73b026-98b5-405a-bf57-dd73462e074f" in namespace "downward-api-6760" to be "success or failure" Jul 20 21:18:10.796: INFO: Pod "downward-api-ba73b026-98b5-405a-bf57-dd73462e074f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.536015ms Jul 20 21:18:12.800: INFO: Pod "downward-api-ba73b026-98b5-405a-bf57-dd73462e074f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038737199s Jul 20 21:18:14.845: INFO: Pod "downward-api-ba73b026-98b5-405a-bf57-dd73462e074f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083904758s STEP: Saw pod success Jul 20 21:18:14.845: INFO: Pod "downward-api-ba73b026-98b5-405a-bf57-dd73462e074f" satisfied condition "success or failure" Jul 20 21:18:14.848: INFO: Trying to get logs from node jerma-worker pod downward-api-ba73b026-98b5-405a-bf57-dd73462e074f container dapi-container: STEP: delete the pod Jul 20 21:18:14.935: INFO: Waiting for pod downward-api-ba73b026-98b5-405a-bf57-dd73462e074f to disappear Jul 20 21:18:14.977: INFO: Pod downward-api-ba73b026-98b5-405a-bf57-dd73462e074f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:14.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6760" for this suite. 
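These env vars come from resourceFieldRef selectors rather than fieldRef. A sketch of the four selectors the spec name lists (the container name dapi-container matches the log; the image is an assumption):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// resEnv exposes one of the container's own resource values as an env var.
	resEnv := func(name, res string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: res},
			},
		}
	}
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox",
		Env: []corev1.EnvVar{
			resEnv("CPU_LIMIT", "limits.cpu"),
			resEnv("MEMORY_LIMIT", "limits.memory"),
			resEnv("CPU_REQUEST", "requests.cpu"),
			resEnv("MEMORY_REQUEST", "requests.memory"),
		},
	}
	fmt.Println(len(c.Env), "resource env vars configured")
}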
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1116,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:14.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-d0570591-d878-47cf-8ea4-6b9e23cbc4f3 STEP: Creating a pod to test consume secrets Jul 20 21:18:15.132: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8d383dcc-98f7-444d-aa17-ad1a8e7b67d3" in namespace "projected-1775" to be "success or failure" Jul 20 21:18:15.143: INFO: Pod "pod-projected-secrets-8d383dcc-98f7-444d-aa17-ad1a8e7b67d3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.609798ms Jul 20 21:18:17.146: INFO: Pod "pod-projected-secrets-8d383dcc-98f7-444d-aa17-ad1a8e7b67d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013863365s Jul 20 21:18:19.150: INFO: Pod "pod-projected-secrets-8d383dcc-98f7-444d-aa17-ad1a8e7b67d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01829039s STEP: Saw pod success Jul 20 21:18:19.150: INFO: Pod "pod-projected-secrets-8d383dcc-98f7-444d-aa17-ad1a8e7b67d3" satisfied condition "success or failure" Jul 20 21:18:19.154: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8d383dcc-98f7-444d-aa17-ad1a8e7b67d3 container projected-secret-volume-test: STEP: delete the pod Jul 20 21:18:19.180: INFO: Waiting for pod pod-projected-secrets-8d383dcc-98f7-444d-aa17-ad1a8e7b67d3 to disappear Jul 20 21:18:19.234: INFO: Pod pod-projected-secrets-8d383dcc-98f7-444d-aa17-ad1a8e7b67d3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:19.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1775" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1118,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:19.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:18:19.295: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c017589a-3505-4549-a13c-46d387bb7cd2" in namespace "downward-api-2514" to be "success or failure" Jul 20 21:18:19.317: INFO: Pod "downwardapi-volume-c017589a-3505-4549-a13c-46d387bb7cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.790676ms Jul 20 21:18:21.378: INFO: Pod "downwardapi-volume-c017589a-3505-4549-a13c-46d387bb7cd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083405599s Jul 20 21:18:23.383: INFO: Pod "downwardapi-volume-c017589a-3505-4549-a13c-46d387bb7cd2": Phase="Running", Reason="", readiness=true. Elapsed: 4.08777833s Jul 20 21:18:25.387: INFO: Pod "downwardapi-volume-c017589a-3505-4549-a13c-46d387bb7cd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091931224s STEP: Saw pod success Jul 20 21:18:25.387: INFO: Pod "downwardapi-volume-c017589a-3505-4549-a13c-46d387bb7cd2" satisfied condition "success or failure" Jul 20 21:18:25.389: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c017589a-3505-4549-a13c-46d387bb7cd2 container client-container: STEP: delete the pod Jul 20 21:18:25.403: INFO: Waiting for pod downwardapi-volume-c017589a-3505-4549-a13c-46d387bb7cd2 to disappear Jul 20 21:18:25.419: INFO: Pod downwardapi-volume-c017589a-3505-4549-a13c-46d387bb7cd2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:25.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2514" for this suite. 
• [SLOW TEST:6.179 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1137,"failed":0} [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:25.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:18:25.470: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1eb8aca4-c028-4809-8f2d-4e62ab878b38" in namespace "downward-api-4392" to be "success or failure" Jul 20 21:18:25.516: INFO: Pod "downwardapi-volume-1eb8aca4-c028-4809-8f2d-4e62ab878b38": Phase="Pending", Reason="", readiness=false. Elapsed: 45.634715ms Jul 20 21:18:27.519: INFO: Pod "downwardapi-volume-1eb8aca4-c028-4809-8f2d-4e62ab878b38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049160077s Jul 20 21:18:29.529: INFO: Pod "downwardapi-volume-1eb8aca4-c028-4809-8f2d-4e62ab878b38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05874771s STEP: Saw pod success Jul 20 21:18:29.529: INFO: Pod "downwardapi-volume-1eb8aca4-c028-4809-8f2d-4e62ab878b38" satisfied condition "success or failure" Jul 20 21:18:29.531: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1eb8aca4-c028-4809-8f2d-4e62ab878b38 container client-container: STEP: delete the pod Jul 20 21:18:29.553: INFO: Waiting for pod downwardapi-volume-1eb8aca4-c028-4809-8f2d-4e62ab878b38 to disappear Jul 20 21:18:29.564: INFO: Pod downwardapi-volume-1eb8aca4-c028-4809-8f2d-4e62ab878b38 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:29.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4392" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1137,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:29.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Jul 20 21:18:30.279: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:30.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7021" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":92,"skipped":1149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:30.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-25ac47b8-921b-489d-9a9d-e787cd1a1e0f STEP: Creating a pod to test consume secrets Jul 20 21:18:30.528: INFO: Waiting up to 5m0s for pod "pod-secrets-a13178ab-eee3-4b66-b580-b5a0bb12e215" in namespace "secrets-8977" to be "success or failure" Jul 20 21:18:30.582: INFO: Pod "pod-secrets-a13178ab-eee3-4b66-b580-b5a0bb12e215": Phase="Pending", Reason="", readiness=false. Elapsed: 54.279279ms Jul 20 21:18:32.586: INFO: Pod "pod-secrets-a13178ab-eee3-4b66-b580-b5a0bb12e215": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058153363s Jul 20 21:18:34.606: INFO: Pod "pod-secrets-a13178ab-eee3-4b66-b580-b5a0bb12e215": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.077741126s STEP: Saw pod success Jul 20 21:18:34.606: INFO: Pod "pod-secrets-a13178ab-eee3-4b66-b580-b5a0bb12e215" satisfied condition "success or failure" Jul 20 21:18:34.609: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-a13178ab-eee3-4b66-b580-b5a0bb12e215 container secret-volume-test: STEP: delete the pod Jul 20 21:18:34.635: INFO: Waiting for pod pod-secrets-a13178ab-eee3-4b66-b580-b5a0bb12e215 to disappear Jul 20 21:18:34.658: INFO: Pod pod-secrets-a13178ab-eee3-4b66-b580-b5a0bb12e215 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:34.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8977" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1176,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:34.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:18:35.031: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a15728ef-74f1-4ccf-9294-caedcfbd6961" in namespace "projected-8301" to be "success or failure" Jul 20 21:18:35.054: INFO: Pod "downwardapi-volume-a15728ef-74f1-4ccf-9294-caedcfbd6961": Phase="Pending", Reason="", readiness=false. Elapsed: 22.45694ms Jul 20 21:18:37.057: INFO: Pod "downwardapi-volume-a15728ef-74f1-4ccf-9294-caedcfbd6961": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025874356s Jul 20 21:18:39.062: INFO: Pod "downwardapi-volume-a15728ef-74f1-4ccf-9294-caedcfbd6961": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030406831s STEP: Saw pod success Jul 20 21:18:39.062: INFO: Pod "downwardapi-volume-a15728ef-74f1-4ccf-9294-caedcfbd6961" satisfied condition "success or failure" Jul 20 21:18:39.065: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a15728ef-74f1-4ccf-9294-caedcfbd6961 container client-container: STEP: delete the pod Jul 20 21:18:39.079: INFO: Waiting for pod downwardapi-volume-a15728ef-74f1-4ccf-9294-caedcfbd6961 to disappear Jul 20 21:18:39.145: INFO: Pod downwardapi-volume-a15728ef-74f1-4ccf-9294-caedcfbd6961 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:39.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8301" for this suite. 
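This cpu-request spec uses the projected flavor of the downward API: the same items sit under projected.sources, which allows mixing them with secret and configMap projections in a single mount. A minimal sketch, assuming illustrative names and quantities:

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-downwardapi-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_request
                resourceFieldRef:
                  containerName: client-container
                  resource: requests.cpu
                  divisor: 1m   # file reads 250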
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1181,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:39.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-5e41a589-291b-4f48-97aa-2326a5293676 STEP: Creating a pod to test consume secrets Jul 20 21:18:39.313: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b3365aba-e99a-4a39-a2fc-1017cbcf143f" in namespace "projected-5528" to be "success or failure" Jul 20 21:18:39.325: INFO: Pod "pod-projected-secrets-b3365aba-e99a-4a39-a2fc-1017cbcf143f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.973559ms Jul 20 21:18:41.328: INFO: Pod "pod-projected-secrets-b3365aba-e99a-4a39-a2fc-1017cbcf143f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014802897s Jul 20 21:18:43.343: INFO: Pod "pod-projected-secrets-b3365aba-e99a-4a39-a2fc-1017cbcf143f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030218178s STEP: Saw pod success Jul 20 21:18:43.343: INFO: Pod "pod-projected-secrets-b3365aba-e99a-4a39-a2fc-1017cbcf143f" satisfied condition "success or failure" Jul 20 21:18:43.346: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-b3365aba-e99a-4a39-a2fc-1017cbcf143f container projected-secret-volume-test: STEP: delete the pod Jul 20 21:18:43.413: INFO: Waiting for pod pod-projected-secrets-b3365aba-e99a-4a39-a2fc-1017cbcf143f to disappear Jul 20 21:18:43.420: INFO: Pod pod-projected-secrets-b3365aba-e99a-4a39-a2fc-1017cbcf143f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:43.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5528" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1188,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:43.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Jul 20 21:18:47.999: INFO: Successfully updated pod "labelsupdatee9c25426-16e0-4845-be11-3c30888110f1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:50.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7865" for this suite. • [SLOW TEST:6.615 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1191,"failed":0} [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:50.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Jul 20 21:18:50.084: INFO: namespace kubectl-2575 Jul 20 21:18:50.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2575' Jul 20 21:18:50.490: INFO: stderr: "" Jul 20 21:18:50.490: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Jul 20 21:18:51.493: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 21:18:51.493: INFO: Found 0 / 1 Jul 20 21:18:52.494: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 21:18:52.494: INFO: Found 0 / 1 Jul 20 21:18:53.494: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 21:18:53.494: INFO: Found 0 / 1 Jul 20 21:18:54.494: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 21:18:54.494: INFO: Found 1 / 1 Jul 20 21:18:54.494: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 20 21:18:54.498: INFO: Selector matched 1 pods for map[app:agnhost] Jul 20 21:18:54.498: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 20 21:18:54.498: INFO: wait on agnhost-master startup in kubectl-2575 Jul 20 21:18:54.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-2zr2k agnhost-master --namespace=kubectl-2575' Jul 20 21:18:54.615: INFO: stderr: "" Jul 20 21:18:54.615: INFO: stdout: "Paused\n" STEP: exposing RC Jul 20 21:18:54.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2575' Jul 20 21:18:54.756: INFO: stderr: "" Jul 20 21:18:54.756: INFO: stdout: "service/rm2 exposed\n" Jul 20 21:18:54.797: INFO: Service rm2 in namespace kubectl-2575 found. STEP: exposing service Jul 20 21:18:56.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2575' Jul 20 21:18:56.929: INFO: stderr: "" Jul 20 21:18:56.929: INFO: stdout: "service/rm3 exposed\n" Jul 20 21:18:56.946: INFO: Service rm3 in namespace kubectl-2575 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:58.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2575" for this suite. 
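kubectl expose, as run above, synthesizes a Service from the replication controller's pod selector. A Service roughly equivalent to the rm2 created in this spec, with the selector taken from the run's map[app:agnhost] and the rest being the expose defaults as I understand them:

    apiVersion: v1
    kind: Service
    metadata:
      name: rm2
      namespace: kubectl-2575
    spec:
      selector:
        app: agnhost
      ports:
      - protocol: TCP
        port: 1234
        targetPort: 6379

Exposing rm2 again as rm3 then yields a second Service with port 2345 and the same selector and targetPort, so both front the single agnhost-master pod.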
• [SLOW TEST:8.919 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":97,"skipped":1191,"failed":0} [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:58.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 20 21:18:59.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5660' Jul 20 21:18:59.189: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 20 21:18:59.189: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Jul 20 21:18:59.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5660' Jul 20 21:18:59.322: INFO: stderr: "" Jul 20 21:18:59.322: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:18:59.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5660" for this suite. 
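The deprecated --generator=job/v1 path exercised above is roughly sugar for a Job like the following; the image and restart policy come from the run, while the label is my assumption about what the generator attaches:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: e2e-test-httpd-job
    spec:
      template:
        metadata:
          labels:
            run: e2e-test-httpd-job   # assumed generator label
        spec:
          restartPolicy: OnFailure
          containers:
          - name: e2e-test-httpd-job
            image: docker.io/library/httpd:2.4.38-alpine

The deprecation warning in the output points at kubectl create instead; note that, as I recall, kubectl create job defaults the template's restartPolicy to Never, so OnFailure would have to be set on the manifest itself.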
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":98,"skipped":1191,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:18:59.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-2915 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2915 STEP: Deleting pre-stop pod Jul 20 21:19:14.466: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:19:14.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2915" for this suite. 
• [SLOW TEST:15.198 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":99,"skipped":1198,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:19:14.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-pd2z STEP: Creating a pod to test atomic-volume-subpath Jul 20 21:19:15.077: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pd2z" in namespace "subpath-1438" to be "success or failure" Jul 20 21:19:15.091: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Pending", Reason="", readiness=false. Elapsed: 14.450312ms Jul 20 21:19:17.095: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0183593s Jul 20 21:19:19.099: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. Elapsed: 4.021893041s Jul 20 21:19:21.102: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. Elapsed: 6.025619602s Jul 20 21:19:23.105: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. Elapsed: 8.028359401s Jul 20 21:19:25.109: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. Elapsed: 10.031978926s Jul 20 21:19:27.113: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. Elapsed: 12.036201564s Jul 20 21:19:29.122: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. Elapsed: 14.044917547s Jul 20 21:19:31.126: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. Elapsed: 16.049069243s Jul 20 21:19:33.129: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. Elapsed: 18.052668136s Jul 20 21:19:35.133: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. Elapsed: 20.056653101s Jul 20 21:19:37.137: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. Elapsed: 22.060756437s Jul 20 21:19:39.141: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.064809298s Jul 20 21:19:41.144: INFO: Pod "pod-subpath-test-configmap-pd2z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.067226055s STEP: Saw pod success Jul 20 21:19:41.144: INFO: Pod "pod-subpath-test-configmap-pd2z" satisfied condition "success or failure" Jul 20 21:19:41.146: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-pd2z container test-container-subpath-configmap-pd2z: STEP: delete the pod Jul 20 21:19:41.178: INFO: Waiting for pod pod-subpath-test-configmap-pd2z to disappear Jul 20 21:19:41.182: INFO: Pod pod-subpath-test-configmap-pd2z no longer exists STEP: Deleting pod pod-subpath-test-configmap-pd2z Jul 20 21:19:41.182: INFO: Deleting pod "pod-subpath-test-configmap-pd2z" in namespace "subpath-1438" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:19:41.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1438" for this suite. • [SLOW TEST:26.661 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":100,"skipped":1207,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:19:41.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Jul 20 21:19:41.244: INFO: PodSpec: initContainers in spec.initContainers Jul 20 21:20:31.711: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b80528d9-5787-41c7-9c58-917e69807380", GenerateName:"", Namespace:"init-container-762", SelfLink:"/api/v1/namespaces/init-container-762/pods/pod-init-b80528d9-5787-41c7-9c58-917e69807380", UID:"ab3aaed6-4512-482d-8db8-1ff650423f12", ResourceVersion:"2868514", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63730876781, loc:(*time.Location)(0x78f7140)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"244495381"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tpxjw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00294e000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tpxjw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tpxjw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tpxjw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0051ac068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00231a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0051ac0f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0051ac110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0051ac118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0051ac11c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876781, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876781, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876781, loc:(*time.Location)(0x78f7140)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730876781, loc:(*time.Location)(0x78f7140)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.2.95", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.95"}}, StartTime:(*v1.Time)(0xc001f94040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002740070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://0a518c591bce6b984b3b0b47ab3c15409c22a3d9bca1137920aabe0623c09a57", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f94080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f94060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0051ac19f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:20:31.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-762" for this suite. 
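Stripped of the framework noise, the pod under test in the dump above is essentially the following, reconstructed from the InitContainers and Containers fields printed in the run (only the name is invented):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-demo   # the run used a generated name
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: docker.io/library/busybox:1.29
        command: ["/bin/false"]   # always fails
      - name: init2
        image: docker.io/library/busybox:1.29
        command: ["/bin/true"]    # never reached while init1 keeps failing
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.1

Because the restart policy is Always, the kubelet retries init1 with backoff (RestartCount:3 by the time of the dump) and the pod parks in Pending with ContainersNotInitialized; run1 must never start, which is exactly what the spec asserts.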
• [SLOW TEST:50.531 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":101,"skipped":1209,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:20:31.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Jul 20 21:20:32.299: INFO: created pod pod-service-account-defaultsa Jul 20 21:20:32.299: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 20 21:20:32.309: INFO: created pod pod-service-account-mountsa Jul 20 21:20:32.309: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 20 21:20:32.315: INFO: created pod pod-service-account-nomountsa Jul 20 21:20:32.315: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jul 20 21:20:32.339: INFO: created pod pod-service-account-defaultsa-mountspec Jul 20 21:20:32.339: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 20 21:20:32.381: INFO: created pod pod-service-account-mountsa-mountspec Jul 20 21:20:32.381: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 20 21:20:32.428: INFO: created pod pod-service-account-nomountsa-mountspec Jul 20 21:20:32.428: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 20 21:20:32.478: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 20 21:20:32.478: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jul 20 21:20:32.512: INFO: created pod pod-service-account-mountsa-nomountspec Jul 20 21:20:32.512: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 20 21:20:32.566: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 20 21:20:32.566: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:20:32.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-49" for this suite. 
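The automount matrix above boils down to a precedence rule: the pod-spec field beats the ServiceAccount field, and both default to mounting. The nomountsa-mountspec case, for instance (field names are the real API; object names are illustrative):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nomount-sa
    automountServiceAccountToken: false   # the SA opts out...
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-nomountsa-mountspec
    spec:
      serviceAccountName: nomount-sa
      automountServiceAccountToken: true  # ...but the pod spec wins: token is mounted
      containers:
      - name: main
        image: k8s.gcr.io/pause:3.1

That matches the run's output: every *-mountspec pod reports mount: true and every *-nomountspec pod reports mount: false, regardless of the ServiceAccount's own setting.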
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":102,"skipped":1215,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:20:32.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:20:39.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-972" for this suite. • [SLOW TEST:8.639 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":103,"skipped":1233,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:20:41.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Jul 20 21:20:42.887: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix143189060/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:20:42.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3702" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":104,"skipped":1249,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:20:43.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:20:55.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4426" for this suite. • [SLOW TEST:11.788 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":105,"skipped":1264,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:20:55.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Jul 20 21:20:55.322: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:20:59.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1493" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1274,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:20:59.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:20:59.592: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28c85985-daa6-4f2e-83c1-320546b95d19" in namespace "downward-api-5948" to be "success or failure" Jul 20 21:20:59.623: INFO: Pod "downwardapi-volume-28c85985-daa6-4f2e-83c1-320546b95d19": Phase="Pending", Reason="", readiness=false. Elapsed: 30.892897ms Jul 20 21:21:01.627: INFO: Pod "downwardapi-volume-28c85985-daa6-4f2e-83c1-320546b95d19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035057715s Jul 20 21:21:03.632: INFO: Pod "downwardapi-volume-28c85985-daa6-4f2e-83c1-320546b95d19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039347798s STEP: Saw pod success Jul 20 21:21:03.632: INFO: Pod "downwardapi-volume-28c85985-daa6-4f2e-83c1-320546b95d19" satisfied condition "success or failure" Jul 20 21:21:03.635: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-28c85985-daa6-4f2e-83c1-320546b95d19 container client-container: STEP: delete the pod Jul 20 21:21:03.717: INFO: Waiting for pod downwardapi-volume-28c85985-daa6-4f2e-83c1-320546b95d19 to disappear Jul 20 21:21:03.723: INFO: Pod downwardapi-volume-28c85985-daa6-4f2e-83c1-320546b95d19 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:21:03.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5948" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1289,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:21:03.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 20 21:21:03.787: INFO: Waiting up to 5m0s for pod "pod-104c6bef-0c75-40a5-b587-27b62ce3c173" in namespace "emptydir-7683" to be "success or failure" Jul 20 21:21:03.789: INFO: Pod "pod-104c6bef-0c75-40a5-b587-27b62ce3c173": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344301ms Jul 20 21:21:05.793: INFO: Pod "pod-104c6bef-0c75-40a5-b587-27b62ce3c173": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006054721s Jul 20 21:21:07.797: INFO: Pod "pod-104c6bef-0c75-40a5-b587-27b62ce3c173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009942628s STEP: Saw pod success Jul 20 21:21:07.797: INFO: Pod "pod-104c6bef-0c75-40a5-b587-27b62ce3c173" satisfied condition "success or failure" Jul 20 21:21:07.799: INFO: Trying to get logs from node jerma-worker2 pod pod-104c6bef-0c75-40a5-b587-27b62ce3c173 container test-container: STEP: delete the pod Jul 20 21:21:07.836: INFO: Waiting for pod pod-104c6bef-0c75-40a5-b587-27b62ce3c173 to disappear Jul 20 21:21:07.843: INFO: Pod pod-104c6bef-0c75-40a5-b587-27b62ce3c173 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:21:07.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7683" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1339,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Jul 20 21:21:07.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Jul 20 21:21:07.959: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ca72ad1-1624-410b-8658-71a276d75a27" in namespace "projected-5100" to be "success or failure" Jul 20 21:21:08.087: INFO: Pod "downwardapi-volume-8ca72ad1-1624-410b-8658-71a276d75a27": Phase="Pending", Reason="", readiness=false. Elapsed: 127.476851ms Jul 20 21:21:10.090: INFO: Pod "downwardapi-volume-8ca72ad1-1624-410b-8658-71a276d75a27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130906305s Jul 20 21:21:12.095: INFO: Pod "downwardapi-volume-8ca72ad1-1624-410b-8658-71a276d75a27": Phase="Running", Reason="", readiness=true. Elapsed: 4.135209802s Jul 20 21:21:14.099: INFO: Pod "downwardapi-volume-8ca72ad1-1624-410b-8658-71a276d75a27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139355091s STEP: Saw pod success Jul 20 21:21:14.099: INFO: Pod "downwardapi-volume-8ca72ad1-1624-410b-8658-71a276d75a27" satisfied condition "success or failure" Jul 20 21:21:14.102: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8ca72ad1-1624-410b-8658-71a276d75a27 container client-container: STEP: delete the pod Jul 20 21:21:14.127: INFO: Waiting for pod downwardapi-volume-8ca72ad1-1624-410b-8658-71a276d75a27 to disappear Jul 20 21:21:14.129: INFO: Pod downwardapi-volume-8ca72ad1-1624-410b-8658-71a276d75a27 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Jul 20 21:21:14.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5100" for this suite. 
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:21:14.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:21:14.263: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
alternatives.log
containers/

[... the same alternatives.log / containers/ listing is returned for each of the remaining proxied requests; the log is truncated here, cutting off the end of this Proxy test and the beginning of the following [sig-cli] Kubectl client test ...]
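The listing above is the kubelet's /logs/ directory index, fetched through the apiserver's node proxy subresource with the kubelet port spelled out explicitly. Roughly the same request can be issued by hand (node name taken from the log):

    NODE=jerma-worker
    kubectl get --raw "/api/v1/nodes/${NODE}:10250/proxy/logs/"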
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 20 21:21:14.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5805'
Jul 20 21:21:14.589: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 20 21:21:14.590: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495
Jul 20 21:21:16.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5805'
Jul 20 21:21:16.764: INFO: stderr: ""
Jul 20 21:21:16.764: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:21:16.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5805" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":111,"skipped":1405,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:21:16.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jul 20 21:21:16.893: INFO: >>> kubeConfig: /root/.kube/config
Jul 20 21:21:19.847: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:21:29.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2284" for this suite.

• [SLOW TEST:12.432 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":112,"skipped":1421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:21:29.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul 20 21:21:34.404: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:21:35.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8190" for this suite.

• [SLOW TEST:6.220 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":113,"skipped":1446,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:21:35.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-da06c8cb-315b-4686-a324-5f7ae79490fc
STEP: Creating a pod to test consume configMaps
Jul 20 21:21:36.062: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ed46b2c-72c8-4661-9726-8aa46a5979f3" in namespace "configmap-4491" to be "success or failure"
Jul 20 21:21:36.065: INFO: Pod "pod-configmaps-7ed46b2c-72c8-4661-9726-8aa46a5979f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.117815ms
Jul 20 21:21:38.111: INFO: Pod "pod-configmaps-7ed46b2c-72c8-4661-9726-8aa46a5979f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049443033s
Jul 20 21:21:40.115: INFO: Pod "pod-configmaps-7ed46b2c-72c8-4661-9726-8aa46a5979f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053546128s
STEP: Saw pod success
Jul 20 21:21:40.115: INFO: Pod "pod-configmaps-7ed46b2c-72c8-4661-9726-8aa46a5979f3" satisfied condition "success or failure"
Jul 20 21:21:40.118: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-7ed46b2c-72c8-4661-9726-8aa46a5979f3 container configmap-volume-test: 
STEP: delete the pod
Jul 20 21:21:40.153: INFO: Waiting for pod pod-configmaps-7ed46b2c-72c8-4661-9726-8aa46a5979f3 to disappear
Jul 20 21:21:40.172: INFO: Pod pod-configmaps-7ed46b2c-72c8-4661-9726-8aa46a5979f3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:21:40.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4491" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1452,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:21:40.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1287
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1287
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1287
Jul 20 21:21:40.333: INFO: Found 0 stateful pods, waiting for 1
Jul 20 21:21:50.336: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul 20 21:21:50.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1287 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 21:21:50.652: INFO: stderr: "I0720 21:21:50.512650    1595 log.go:172] (0xc000a22160) (0xc0009e80a0) Create stream\nI0720 21:21:50.512811    1595 log.go:172] (0xc000a22160) (0xc0009e80a0) Stream added, broadcasting: 1\nI0720 21:21:50.517306    1595 log.go:172] (0xc000a22160) Reply frame received for 1\nI0720 21:21:50.517344    1595 log.go:172] (0xc000a22160) (0xc000541cc0) Create stream\nI0720 21:21:50.517356    1595 log.go:172] (0xc000a22160) (0xc000541cc0) Stream added, broadcasting: 3\nI0720 21:21:50.518277    1595 log.go:172] (0xc000a22160) Reply frame received for 3\nI0720 21:21:50.518306    1595 log.go:172] (0xc000a22160) (0xc0007a32c0) Create stream\nI0720 21:21:50.518315    1595 log.go:172] (0xc000a22160) (0xc0007a32c0) Stream added, broadcasting: 5\nI0720 21:21:50.519125    1595 log.go:172] (0xc000a22160) Reply frame received for 5\nI0720 21:21:50.616472    1595 log.go:172] (0xc000a22160) Data frame received for 5\nI0720 21:21:50.616503    1595 log.go:172] (0xc0007a32c0) (5) Data frame handling\nI0720 21:21:50.616525    1595 log.go:172] (0xc0007a32c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 21:21:50.644497    1595 log.go:172] (0xc000a22160) Data frame received for 3\nI0720 21:21:50.644537    1595 log.go:172] (0xc000541cc0) (3) Data frame handling\nI0720 21:21:50.644591    1595 log.go:172] (0xc000541cc0) (3) Data frame sent\nI0720 21:21:50.644955    1595 log.go:172] (0xc000a22160) Data frame received for 5\nI0720 21:21:50.644986    1595 log.go:172] (0xc0007a32c0) (5) Data frame handling\nI0720 21:21:50.645138    1595 log.go:172] (0xc000a22160) Data frame received for 3\nI0720 21:21:50.645159    1595 log.go:172] (0xc000541cc0) (3) Data frame handling\nI0720 21:21:50.646695    1595 log.go:172] (0xc000a22160) Data frame received for 1\nI0720 21:21:50.646721    1595 log.go:172] (0xc0009e80a0) (1) Data frame handling\nI0720 21:21:50.646753    1595 log.go:172] (0xc0009e80a0) (1) Data frame sent\nI0720 21:21:50.646839    1595 log.go:172] (0xc000a22160) (0xc0009e80a0) Stream removed, broadcasting: 1\nI0720 21:21:50.646900    1595 log.go:172] (0xc000a22160) Go away received\nI0720 21:21:50.647213    1595 log.go:172] (0xc000a22160) (0xc0009e80a0) Stream removed, broadcasting: 1\nI0720 21:21:50.647232    1595 log.go:172] (0xc000a22160) (0xc000541cc0) Stream removed, broadcasting: 3\nI0720 21:21:50.647242    1595 log.go:172] (0xc000a22160) (0xc0007a32c0) Stream removed, broadcasting: 5\n"
Jul 20 21:21:50.652: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 21:21:50.652: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 20 21:21:50.656: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 20 21:22:00.674: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 20 21:22:00.674: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 21:22:00.694: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999439s
Jul 20 21:22:01.698: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989805968s
Jul 20 21:22:02.702: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98589725s
Jul 20 21:22:03.729: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981711377s
Jul 20 21:22:04.733: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955221099s
Jul 20 21:22:05.737: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.951190557s
Jul 20 21:22:06.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.946737691s
Jul 20 21:22:07.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.942757139s
Jul 20 21:22:08.748: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.939002806s
Jul 20 21:22:09.755: INFO: Verifying statefulset ss doesn't scale past 1 for another 935.495452ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1287
Jul 20 21:22:10.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1287 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 21:22:10.999: INFO: stderr: "I0720 21:22:10.909500    1618 log.go:172] (0xc000119290) (0xc000972000) Create stream\nI0720 21:22:10.909557    1618 log.go:172] (0xc000119290) (0xc000972000) Stream added, broadcasting: 1\nI0720 21:22:10.911638    1618 log.go:172] (0xc000119290) Reply frame received for 1\nI0720 21:22:10.911691    1618 log.go:172] (0xc000119290) (0xc000665a40) Create stream\nI0720 21:22:10.911704    1618 log.go:172] (0xc000119290) (0xc000665a40) Stream added, broadcasting: 3\nI0720 21:22:10.912528    1618 log.go:172] (0xc000119290) Reply frame received for 3\nI0720 21:22:10.912580    1618 log.go:172] (0xc000119290) (0xc0009720a0) Create stream\nI0720 21:22:10.912604    1618 log.go:172] (0xc000119290) (0xc0009720a0) Stream added, broadcasting: 5\nI0720 21:22:10.913555    1618 log.go:172] (0xc000119290) Reply frame received for 5\nI0720 21:22:10.991702    1618 log.go:172] (0xc000119290) Data frame received for 3\nI0720 21:22:10.991739    1618 log.go:172] (0xc000665a40) (3) Data frame handling\nI0720 21:22:10.991755    1618 log.go:172] (0xc000665a40) (3) Data frame sent\nI0720 21:22:10.991767    1618 log.go:172] (0xc000119290) Data frame received for 3\nI0720 21:22:10.991778    1618 log.go:172] (0xc000665a40) (3) Data frame handling\nI0720 21:22:10.991793    1618 log.go:172] (0xc000119290) Data frame received for 5\nI0720 21:22:10.991803    1618 log.go:172] (0xc0009720a0) (5) Data frame handling\nI0720 21:22:10.991827    1618 log.go:172] (0xc0009720a0) (5) Data frame sent\nI0720 21:22:10.991838    1618 log.go:172] (0xc000119290) Data frame received for 5\nI0720 21:22:10.991848    1618 log.go:172] (0xc0009720a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 21:22:10.993785    1618 log.go:172] (0xc000119290) Data frame received for 1\nI0720 21:22:10.993814    1618 log.go:172] (0xc000972000) (1) Data frame handling\nI0720 21:22:10.993836    1618 log.go:172] (0xc000972000) (1) Data frame sent\nI0720 21:22:10.993851    1618 log.go:172] (0xc000119290) (0xc000972000) Stream removed, broadcasting: 1\nI0720 21:22:10.993875    1618 log.go:172] (0xc000119290) Go away received\nI0720 21:22:10.994257    1618 log.go:172] (0xc000119290) (0xc000972000) Stream removed, broadcasting: 1\nI0720 21:22:10.994281    1618 log.go:172] (0xc000119290) (0xc000665a40) Stream removed, broadcasting: 3\nI0720 21:22:10.994294    1618 log.go:172] (0xc000119290) (0xc0009720a0) Stream removed, broadcasting: 5\n"
Jul 20 21:22:11.000: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 21:22:11.000: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 21:22:11.004: INFO: Found 1 stateful pods, waiting for 3
Jul 20 21:22:21.008: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 21:22:21.008: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 21:22:21.008: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul 20 21:22:21.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1287 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 21:22:21.241: INFO: stderr: "I0720 21:22:21.146179    1639 log.go:172] (0xc000a94000) (0xc0005fc640) Create stream\nI0720 21:22:21.146243    1639 log.go:172] (0xc000a94000) (0xc0005fc640) Stream added, broadcasting: 1\nI0720 21:22:21.149220    1639 log.go:172] (0xc000a94000) Reply frame received for 1\nI0720 21:22:21.149277    1639 log.go:172] (0xc000a94000) (0xc0001a7400) Create stream\nI0720 21:22:21.149294    1639 log.go:172] (0xc000a94000) (0xc0001a7400) Stream added, broadcasting: 3\nI0720 21:22:21.150275    1639 log.go:172] (0xc000a94000) Reply frame received for 3\nI0720 21:22:21.150320    1639 log.go:172] (0xc000a94000) (0xc0001a74a0) Create stream\nI0720 21:22:21.150348    1639 log.go:172] (0xc000a94000) (0xc0001a74a0) Stream added, broadcasting: 5\nI0720 21:22:21.151515    1639 log.go:172] (0xc000a94000) Reply frame received for 5\nI0720 21:22:21.233957    1639 log.go:172] (0xc000a94000) Data frame received for 3\nI0720 21:22:21.234014    1639 log.go:172] (0xc000a94000) Data frame received for 5\nI0720 21:22:21.234057    1639 log.go:172] (0xc0001a74a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 21:22:21.234093    1639 log.go:172] (0xc0001a7400) (3) Data frame handling\nI0720 21:22:21.234150    1639 log.go:172] (0xc0001a7400) (3) Data frame sent\nI0720 21:22:21.234167    1639 log.go:172] (0xc000a94000) Data frame received for 3\nI0720 21:22:21.234179    1639 log.go:172] (0xc0001a7400) (3) Data frame handling\nI0720 21:22:21.234214    1639 log.go:172] (0xc0001a74a0) (5) Data frame sent\nI0720 21:22:21.234264    1639 log.go:172] (0xc000a94000) Data frame received for 5\nI0720 21:22:21.234287    1639 log.go:172] (0xc0001a74a0) (5) Data frame handling\nI0720 21:22:21.235955    1639 log.go:172] (0xc000a94000) Data frame received for 1\nI0720 21:22:21.235978    1639 log.go:172] (0xc0005fc640) (1) Data frame handling\nI0720 21:22:21.235992    1639 log.go:172] (0xc0005fc640) (1) Data frame sent\nI0720 21:22:21.236011    1639 log.go:172] (0xc000a94000) (0xc0005fc640) Stream removed, broadcasting: 1\nI0720 21:22:21.236036    1639 log.go:172] (0xc000a94000) Go away received\nI0720 21:22:21.236407    1639 log.go:172] (0xc000a94000) (0xc0005fc640) Stream removed, broadcasting: 1\nI0720 21:22:21.236433    1639 log.go:172] (0xc000a94000) (0xc0001a7400) Stream removed, broadcasting: 3\nI0720 21:22:21.236445    1639 log.go:172] (0xc000a94000) (0xc0001a74a0) Stream removed, broadcasting: 5\n"
Jul 20 21:22:21.241: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 21:22:21.241: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 20 21:22:21.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1287 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 21:22:21.493: INFO: stderr: "I0720 21:22:21.387929    1661 log.go:172] (0xc000976000) (0xc0006d7ae0) Create stream\nI0720 21:22:21.387980    1661 log.go:172] (0xc000976000) (0xc0006d7ae0) Stream added, broadcasting: 1\nI0720 21:22:21.390320    1661 log.go:172] (0xc000976000) Reply frame received for 1\nI0720 21:22:21.390343    1661 log.go:172] (0xc000976000) (0xc0006d7cc0) Create stream\nI0720 21:22:21.390350    1661 log.go:172] (0xc000976000) (0xc0006d7cc0) Stream added, broadcasting: 3\nI0720 21:22:21.391186    1661 log.go:172] (0xc000976000) Reply frame received for 3\nI0720 21:22:21.391228    1661 log.go:172] (0xc000976000) (0xc000a14000) Create stream\nI0720 21:22:21.391242    1661 log.go:172] (0xc000976000) (0xc000a14000) Stream added, broadcasting: 5\nI0720 21:22:21.392134    1661 log.go:172] (0xc000976000) Reply frame received for 5\nI0720 21:22:21.449587    1661 log.go:172] (0xc000976000) Data frame received for 5\nI0720 21:22:21.449612    1661 log.go:172] (0xc000a14000) (5) Data frame handling\nI0720 21:22:21.449630    1661 log.go:172] (0xc000a14000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 21:22:21.485426    1661 log.go:172] (0xc000976000) Data frame received for 3\nI0720 21:22:21.485457    1661 log.go:172] (0xc0006d7cc0) (3) Data frame handling\nI0720 21:22:21.485476    1661 log.go:172] (0xc0006d7cc0) (3) Data frame sent\nI0720 21:22:21.485714    1661 log.go:172] (0xc000976000) Data frame received for 5\nI0720 21:22:21.485757    1661 log.go:172] (0xc000a14000) (5) Data frame handling\nI0720 21:22:21.485793    1661 log.go:172] (0xc000976000) Data frame received for 3\nI0720 21:22:21.485816    1661 log.go:172] (0xc0006d7cc0) (3) Data frame handling\nI0720 21:22:21.487887    1661 log.go:172] (0xc000976000) Data frame received for 1\nI0720 21:22:21.487918    1661 log.go:172] (0xc0006d7ae0) (1) Data frame handling\nI0720 21:22:21.487934    1661 log.go:172] (0xc0006d7ae0) (1) Data frame sent\nI0720 21:22:21.487948    1661 log.go:172] (0xc000976000) (0xc0006d7ae0) Stream removed, broadcasting: 1\nI0720 21:22:21.487961    1661 log.go:172] (0xc000976000) Go away received\nI0720 21:22:21.488379    1661 log.go:172] (0xc000976000) (0xc0006d7ae0) Stream removed, broadcasting: 1\nI0720 21:22:21.488399    1661 log.go:172] (0xc000976000) (0xc0006d7cc0) Stream removed, broadcasting: 3\nI0720 21:22:21.488411    1661 log.go:172] (0xc000976000) (0xc000a14000) Stream removed, broadcasting: 5\n"
Jul 20 21:22:21.493: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 21:22:21.493: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 20 21:22:21.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1287 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 21:22:21.760: INFO: stderr: "I0720 21:22:21.630364    1681 log.go:172] (0xc000a08000) (0xc000a3a000) Create stream\nI0720 21:22:21.630454    1681 log.go:172] (0xc000a08000) (0xc000a3a000) Stream added, broadcasting: 1\nI0720 21:22:21.634102    1681 log.go:172] (0xc000a08000) Reply frame received for 1\nI0720 21:22:21.634141    1681 log.go:172] (0xc000a08000) (0xc0004d3540) Create stream\nI0720 21:22:21.634151    1681 log.go:172] (0xc000a08000) (0xc0004d3540) Stream added, broadcasting: 3\nI0720 21:22:21.635133    1681 log.go:172] (0xc000a08000) Reply frame received for 3\nI0720 21:22:21.635170    1681 log.go:172] (0xc000a08000) (0xc000a3a0a0) Create stream\nI0720 21:22:21.635182    1681 log.go:172] (0xc000a08000) (0xc000a3a0a0) Stream added, broadcasting: 5\nI0720 21:22:21.636100    1681 log.go:172] (0xc000a08000) Reply frame received for 5\nI0720 21:22:21.707170    1681 log.go:172] (0xc000a08000) Data frame received for 5\nI0720 21:22:21.707196    1681 log.go:172] (0xc000a3a0a0) (5) Data frame handling\nI0720 21:22:21.707219    1681 log.go:172] (0xc000a3a0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 21:22:21.752601    1681 log.go:172] (0xc000a08000) Data frame received for 3\nI0720 21:22:21.752659    1681 log.go:172] (0xc0004d3540) (3) Data frame handling\nI0720 21:22:21.752678    1681 log.go:172] (0xc0004d3540) (3) Data frame sent\nI0720 21:22:21.752693    1681 log.go:172] (0xc000a08000) Data frame received for 3\nI0720 21:22:21.752704    1681 log.go:172] (0xc0004d3540) (3) Data frame handling\nI0720 21:22:21.752833    1681 log.go:172] (0xc000a08000) Data frame received for 5\nI0720 21:22:21.752860    1681 log.go:172] (0xc000a3a0a0) (5) Data frame handling\nI0720 21:22:21.754780    1681 log.go:172] (0xc000a08000) Data frame received for 1\nI0720 21:22:21.754829    1681 log.go:172] (0xc000a3a000) (1) Data frame handling\nI0720 21:22:21.754858    1681 log.go:172] (0xc000a3a000) (1) Data frame sent\nI0720 21:22:21.754879    1681 log.go:172] (0xc000a08000) (0xc000a3a000) Stream removed, broadcasting: 1\nI0720 21:22:21.754988    1681 log.go:172] (0xc000a08000) Go away received\nI0720 21:22:21.755412    1681 log.go:172] (0xc000a08000) (0xc000a3a000) Stream removed, broadcasting: 1\nI0720 21:22:21.755439    1681 log.go:172] (0xc000a08000) (0xc0004d3540) Stream removed, broadcasting: 3\nI0720 21:22:21.755456    1681 log.go:172] (0xc000a08000) (0xc000a3a0a0) Stream removed, broadcasting: 5\n"
Jul 20 21:22:21.760: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 21:22:21.761: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 20 21:22:21.761: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 21:22:21.765: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul 20 21:22:31.772: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 20 21:22:31.773: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 20 21:22:31.773: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 20 21:22:31.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999591s
Jul 20 21:22:32.830: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.952790237s
Jul 20 21:22:33.835: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.948179817s
Jul 20 21:22:34.840: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.942942793s
Jul 20 21:22:35.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.938639531s
Jul 20 21:22:36.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.933864548s
Jul 20 21:22:37.853: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.929404965s
Jul 20 21:22:38.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.925080733s
Jul 20 21:22:39.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.920861322s
Jul 20 21:22:40.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 915.852972ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1287
Jul 20 21:22:41.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1287 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 21:22:42.119: INFO: stderr: "I0720 21:22:42.031088    1701 log.go:172] (0xc0000f5550) (0xc00060bcc0) Create stream\nI0720 21:22:42.031152    1701 log.go:172] (0xc0000f5550) (0xc00060bcc0) Stream added, broadcasting: 1\nI0720 21:22:42.033848    1701 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0720 21:22:42.033903    1701 log.go:172] (0xc0000f5550) (0xc00074e000) Create stream\nI0720 21:22:42.033940    1701 log.go:172] (0xc0000f5550) (0xc00074e000) Stream added, broadcasting: 3\nI0720 21:22:42.035272    1701 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0720 21:22:42.035312    1701 log.go:172] (0xc0000f5550) (0xc00060bd60) Create stream\nI0720 21:22:42.035323    1701 log.go:172] (0xc0000f5550) (0xc00060bd60) Stream added, broadcasting: 5\nI0720 21:22:42.036280    1701 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0720 21:22:42.112037    1701 log.go:172] (0xc0000f5550) Data frame received for 3\nI0720 21:22:42.112074    1701 log.go:172] (0xc00074e000) (3) Data frame handling\nI0720 21:22:42.112087    1701 log.go:172] (0xc00074e000) (3) Data frame sent\nI0720 21:22:42.112098    1701 log.go:172] (0xc0000f5550) Data frame received for 3\nI0720 21:22:42.112108    1701 log.go:172] (0xc00074e000) (3) Data frame handling\nI0720 21:22:42.112124    1701 log.go:172] (0xc0000f5550) Data frame received for 5\nI0720 21:22:42.112134    1701 log.go:172] (0xc00060bd60) (5) Data frame handling\nI0720 21:22:42.112141    1701 log.go:172] (0xc00060bd60) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 21:22:42.112392    1701 log.go:172] (0xc0000f5550) Data frame received for 5\nI0720 21:22:42.112410    1701 log.go:172] (0xc00060bd60) (5) Data frame handling\nI0720 21:22:42.113967    1701 log.go:172] (0xc0000f5550) Data frame received for 1\nI0720 21:22:42.113984    1701 log.go:172] (0xc00060bcc0) (1) Data frame handling\nI0720 21:22:42.113993    1701 log.go:172] (0xc00060bcc0) (1) Data frame sent\nI0720 21:22:42.114215    1701 log.go:172] (0xc0000f5550) (0xc00060bcc0) Stream removed, broadcasting: 1\nI0720 21:22:42.114359    1701 log.go:172] (0xc0000f5550) Go away received\nI0720 21:22:42.114628    1701 log.go:172] (0xc0000f5550) (0xc00060bcc0) Stream removed, broadcasting: 1\nI0720 21:22:42.114650    1701 log.go:172] (0xc0000f5550) (0xc00074e000) Stream removed, broadcasting: 3\nI0720 21:22:42.114659    1701 log.go:172] (0xc0000f5550) (0xc00060bd60) Stream removed, broadcasting: 5\n"
Jul 20 21:22:42.119: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 21:22:42.119: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 21:22:42.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1287 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 21:22:42.309: INFO: stderr: "I0720 21:22:42.231397    1723 log.go:172] (0xc000bb8580) (0xc000a32320) Create stream\nI0720 21:22:42.231449    1723 log.go:172] (0xc000bb8580) (0xc000a32320) Stream added, broadcasting: 1\nI0720 21:22:42.235733    1723 log.go:172] (0xc000bb8580) Reply frame received for 1\nI0720 21:22:42.235771    1723 log.go:172] (0xc000bb8580) (0xc000b28000) Create stream\nI0720 21:22:42.235782    1723 log.go:172] (0xc000bb8580) (0xc000b28000) Stream added, broadcasting: 3\nI0720 21:22:42.236914    1723 log.go:172] (0xc000bb8580) Reply frame received for 3\nI0720 21:22:42.236981    1723 log.go:172] (0xc000bb8580) (0xc0006edb80) Create stream\nI0720 21:22:42.237021    1723 log.go:172] (0xc000bb8580) (0xc0006edb80) Stream added, broadcasting: 5\nI0720 21:22:42.237902    1723 log.go:172] (0xc000bb8580) Reply frame received for 5\nI0720 21:22:42.298122    1723 log.go:172] (0xc000bb8580) Data frame received for 3\nI0720 21:22:42.298171    1723 log.go:172] (0xc000b28000) (3) Data frame handling\nI0720 21:22:42.298185    1723 log.go:172] (0xc000b28000) (3) Data frame sent\nI0720 21:22:42.298196    1723 log.go:172] (0xc000bb8580) Data frame received for 3\nI0720 21:22:42.298206    1723 log.go:172] (0xc000b28000) (3) Data frame handling\nI0720 21:22:42.298239    1723 log.go:172] (0xc000bb8580) Data frame received for 5\nI0720 21:22:42.298269    1723 log.go:172] (0xc0006edb80) (5) Data frame handling\nI0720 21:22:42.298314    1723 log.go:172] (0xc0006edb80) (5) Data frame sent\nI0720 21:22:42.298347    1723 log.go:172] (0xc000bb8580) Data frame received for 5\nI0720 21:22:42.298358    1723 log.go:172] (0xc0006edb80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 21:22:42.303842    1723 log.go:172] (0xc000bb8580) Data frame received for 1\nI0720 21:22:42.303877    1723 log.go:172] (0xc000a32320) (1) Data frame handling\nI0720 21:22:42.303908    1723 log.go:172] (0xc000a32320) (1) Data frame sent\nI0720 21:22:42.304012    1723 log.go:172] (0xc000bb8580) (0xc000a32320) Stream removed, broadcasting: 1\nI0720 21:22:42.304464    1723 log.go:172] (0xc000bb8580) (0xc000a32320) Stream removed, broadcasting: 1\nI0720 21:22:42.304493    1723 log.go:172] (0xc000bb8580) (0xc000b28000) Stream removed, broadcasting: 3\nI0720 21:22:42.304657    1723 log.go:172] (0xc000bb8580) (0xc0006edb80) Stream removed, broadcasting: 5\n"
Jul 20 21:22:42.309: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 21:22:42.309: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 21:22:42.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1287 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 21:22:42.497: INFO: stderr: "I0720 21:22:42.430582    1743 log.go:172] (0xc0000f5290) (0xc000900000) Create stream\nI0720 21:22:42.430710    1743 log.go:172] (0xc0000f5290) (0xc000900000) Stream added, broadcasting: 1\nI0720 21:22:42.433265    1743 log.go:172] (0xc0000f5290) Reply frame received for 1\nI0720 21:22:42.433324    1743 log.go:172] (0xc0000f5290) (0xc0006ebae0) Create stream\nI0720 21:22:42.433340    1743 log.go:172] (0xc0000f5290) (0xc0006ebae0) Stream added, broadcasting: 3\nI0720 21:22:42.434209    1743 log.go:172] (0xc0000f5290) Reply frame received for 3\nI0720 21:22:42.434238    1743 log.go:172] (0xc0000f5290) (0xc0009000a0) Create stream\nI0720 21:22:42.434248    1743 log.go:172] (0xc0000f5290) (0xc0009000a0) Stream added, broadcasting: 5\nI0720 21:22:42.434962    1743 log.go:172] (0xc0000f5290) Reply frame received for 5\nI0720 21:22:42.490035    1743 log.go:172] (0xc0000f5290) Data frame received for 3\nI0720 21:22:42.490077    1743 log.go:172] (0xc0006ebae0) (3) Data frame handling\nI0720 21:22:42.490093    1743 log.go:172] (0xc0006ebae0) (3) Data frame sent\nI0720 21:22:42.490103    1743 log.go:172] (0xc0000f5290) Data frame received for 3\nI0720 21:22:42.490110    1743 log.go:172] (0xc0006ebae0) (3) Data frame handling\nI0720 21:22:42.490133    1743 log.go:172] (0xc0000f5290) Data frame received for 5\nI0720 21:22:42.490146    1743 log.go:172] (0xc0009000a0) (5) Data frame handling\nI0720 21:22:42.490159    1743 log.go:172] (0xc0009000a0) (5) Data frame sent\nI0720 21:22:42.490167    1743 log.go:172] (0xc0000f5290) Data frame received for 5\nI0720 21:22:42.490173    1743 log.go:172] (0xc0009000a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 21:22:42.491923    1743 log.go:172] (0xc0000f5290) Data frame received for 1\nI0720 21:22:42.491942    1743 log.go:172] (0xc000900000) (1) Data frame handling\nI0720 21:22:42.491953    1743 log.go:172] (0xc000900000) (1) Data frame sent\nI0720 21:22:42.491985    1743 log.go:172] (0xc0000f5290) (0xc000900000) Stream removed, broadcasting: 1\nI0720 21:22:42.492251    1743 log.go:172] (0xc0000f5290) Go away received\nI0720 21:22:42.492322    1743 log.go:172] (0xc0000f5290) (0xc000900000) Stream removed, broadcasting: 1\nI0720 21:22:42.492381    1743 log.go:172] (0xc0000f5290) (0xc0006ebae0) Stream removed, broadcasting: 3\nI0720 21:22:42.492400    1743 log.go:172] (0xc0000f5290) (0xc0009000a0) Stream removed, broadcasting: 5\n"
Jul 20 21:22:42.498: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 21:22:42.498: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 21:22:42.498: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 20 21:23:02.558: INFO: Deleting all statefulset in ns statefulset-1287
Jul 20 21:23:02.560: INFO: Scaling statefulset ss to 0
Jul 20 21:23:02.569: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 21:23:02.572: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:23:02.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1287" for this suite.

• [SLOW TEST:82.417 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":115,"skipped":1459,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:23:02.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 20 21:23:02.677: INFO: Waiting up to 5m0s for pod "pod-3c94712f-c631-4a17-ba75-d6056c5ce1c9" in namespace "emptydir-3675" to be "success or failure"
Jul 20 21:23:02.684: INFO: Pod "pod-3c94712f-c631-4a17-ba75-d6056c5ce1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.458062ms
Jul 20 21:23:04.688: INFO: Pod "pod-3c94712f-c631-4a17-ba75-d6056c5ce1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010372252s
Jul 20 21:23:06.691: INFO: Pod "pod-3c94712f-c631-4a17-ba75-d6056c5ce1c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013654461s
STEP: Saw pod success
Jul 20 21:23:06.691: INFO: Pod "pod-3c94712f-c631-4a17-ba75-d6056c5ce1c9" satisfied condition "success or failure"
Jul 20 21:23:06.694: INFO: Trying to get logs from node jerma-worker2 pod pod-3c94712f-c631-4a17-ba75-d6056c5ce1c9 container test-container: 
STEP: delete the pod
Jul 20 21:23:06.733: INFO: Waiting for pod pod-3c94712f-c631-4a17-ba75-d6056c5ce1c9 to disappear
Jul 20 21:23:06.930: INFO: Pod pod-3c94712f-c631-4a17-ba75-d6056c5ce1c9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:23:06.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3675" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1463,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:23:06.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul 20 21:23:11.095: INFO: &Pod{ObjectMeta:{send-events-79d02e0f-33b6-4648-ace3-4d1d93bad8a1  events-2867 /api/v1/namespaces/events-2867/pods/send-events-79d02e0f-33b6-4648-ace3-4d1d93bad8a1 a501930f-a7f5-4795-81f7-032fe885e564 2869600 0 2020-07-20 21:23:07 +0000 UTC   map[name:foo time:74503271] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7zvvg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7zvvg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7zvvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:23:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:23:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:23:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:23:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.104,StartTime:2020-07-20 21:23:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 21:23:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://73461fb5d7bb350f818704aebcddb110eebd17c1129602e03daef87b3124d057,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jul 20 21:23:13.100: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul 20 21:23:15.103: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:23:15.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2867" for this suite.

• [SLOW TEST:8.213 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":117,"skipped":1499,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:23:15.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul 20 21:23:15.190: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 20 21:23:15.257: INFO: Waiting for terminating namespaces to be deleted...
Jul 20 21:23:15.260: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Jul 20 21:23:15.281: INFO: kube-proxy-2ssxj from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded)
Jul 20 21:23:15.281: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 21:23:15.281: INFO: kindnet-bqk7h from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded)
Jul 20 21:23:15.281: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 20 21:23:15.281: INFO: send-events-79d02e0f-33b6-4648-ace3-4d1d93bad8a1 from events-2867 started at 2020-07-20 21:23:07 +0000 UTC (1 container statuses recorded)
Jul 20 21:23:15.281: INFO: 	Container p ready: true, restart count 0
Jul 20 21:23:15.281: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul 20 21:23:15.285: INFO: kindnet-klj8h from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded)
Jul 20 21:23:15.285: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 20 21:23:15.285: INFO: kube-proxy-67jwf from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded)
Jul 20 21:23:15.285: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162392e2ae2fe914], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:23:16.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5667" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":118,"skipped":1525,"failed":0}
S
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:23:16.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:23:16.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3368" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":119,"skipped":1526,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:23:16.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 20 21:23:16.547: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:16.552: INFO: Number of nodes with available pods: 0
Jul 20 21:23:16.552: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:17.557: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:17.559: INFO: Number of nodes with available pods: 0
Jul 20 21:23:17.559: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:18.671: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:18.709: INFO: Number of nodes with available pods: 0
Jul 20 21:23:18.709: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:19.785: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:19.788: INFO: Number of nodes with available pods: 0
Jul 20 21:23:19.788: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:20.562: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:20.565: INFO: Number of nodes with available pods: 1
Jul 20 21:23:20.565: INFO: Node jerma-worker2 is running more than one daemon pod
Jul 20 21:23:21.555: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:21.558: INFO: Number of nodes with available pods: 1
Jul 20 21:23:21.558: INFO: Node jerma-worker2 is running more than one daemon pod
Jul 20 21:23:22.557: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:22.561: INFO: Number of nodes with available pods: 2
Jul 20 21:23:22.561: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul 20 21:23:22.578: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:22.598: INFO: Number of nodes with available pods: 1
Jul 20 21:23:22.598: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:23.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:23.605: INFO: Number of nodes with available pods: 1
Jul 20 21:23:23.605: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:24.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:24.607: INFO: Number of nodes with available pods: 1
Jul 20 21:23:24.607: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:25.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:25.605: INFO: Number of nodes with available pods: 1
Jul 20 21:23:25.605: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:26.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:26.606: INFO: Number of nodes with available pods: 1
Jul 20 21:23:26.606: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:27.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:27.606: INFO: Number of nodes with available pods: 1
Jul 20 21:23:27.606: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:28.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:28.606: INFO: Number of nodes with available pods: 1
Jul 20 21:23:28.606: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:29.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:29.606: INFO: Number of nodes with available pods: 1
Jul 20 21:23:29.606: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:23:30.602: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:23:30.628: INFO: Number of nodes with available pods: 2
Jul 20 21:23:30.628: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-135, will wait for the garbage collector to delete the pods
Jul 20 21:23:30.698: INFO: Deleting DaemonSet.extensions daemon-set took: 15.453996ms
Jul 20 21:23:30.999: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.208742ms
Jul 20 21:23:37.510: INFO: Number of nodes with available pods: 0
Jul 20 21:23:37.510: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 21:23:37.512: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-135/daemonsets","resourceVersion":"2869775"},"items":null}

Jul 20 21:23:37.562: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-135/pods","resourceVersion":"2869776"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:23:37.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-135" for this suite.

• [SLOW TEST:21.149 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":120,"skipped":1543,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:23:37.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-a5748425-b416-4b3f-ae18-d876d77e7db2 in namespace container-probe-2348
Jul 20 21:23:41.697: INFO: Started pod test-webserver-a5748425-b416-4b3f-ae18-d876d77e7db2 in namespace container-probe-2348
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 21:23:41.700: INFO: Initial restart count of pod test-webserver-a5748425-b416-4b3f-ae18-d876d77e7db2 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:27:42.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2348" for this suite.

• [SLOW TEST:244.816 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1546,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:27:42.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:27:53.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5540" for this suite.

• [SLOW TEST:11.530 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":122,"skipped":1556,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:27:53.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:27:54.055: INFO: Create a RollingUpdate DaemonSet
Jul 20 21:27:54.058: INFO: Check that daemon pods launch on every node of the cluster
Jul 20 21:27:54.060: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:27:54.065: INFO: Number of nodes with available pods: 0
Jul 20 21:27:54.065: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:27:55.070: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:27:55.073: INFO: Number of nodes with available pods: 0
Jul 20 21:27:55.073: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:27:56.472: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:27:56.476: INFO: Number of nodes with available pods: 0
Jul 20 21:27:56.476: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:27:57.070: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:27:57.072: INFO: Number of nodes with available pods: 0
Jul 20 21:27:57.073: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:27:58.073: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:27:58.077: INFO: Number of nodes with available pods: 2
Jul 20 21:27:58.077: INFO: Number of running nodes: 2, number of available pods: 2
Jul 20 21:27:58.077: INFO: Update the DaemonSet to trigger a rollout
Jul 20 21:27:58.081: INFO: Updating DaemonSet daemon-set
Jul 20 21:28:08.112: INFO: Roll back the DaemonSet before rollout is complete
Jul 20 21:28:08.118: INFO: Updating DaemonSet daemon-set
Jul 20 21:28:08.118: INFO: Make sure DaemonSet rollback is complete
Jul 20 21:28:08.140: INFO: Wrong image for pod: daemon-set-zk796. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 20 21:28:08.140: INFO: Pod daemon-set-zk796 is not available
Jul 20 21:28:08.147: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:28:09.151: INFO: Wrong image for pod: daemon-set-zk796. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 20 21:28:09.151: INFO: Pod daemon-set-zk796 is not available
Jul 20 21:28:09.155: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:28:10.195: INFO: Pod daemon-set-9kgqt is not available
Jul 20 21:28:10.198: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1043, will wait for the garbage collector to delete the pods
Jul 20 21:28:10.434: INFO: Deleting DaemonSet.extensions daemon-set took: 6.964803ms
Jul 20 21:28:10.834: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.28241ms
Jul 20 21:28:14.143: INFO: Number of nodes with available pods: 0
Jul 20 21:28:14.143: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 21:28:14.146: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1043/daemonsets","resourceVersion":"2870720"},"items":null}

Jul 20 21:28:14.148: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1043/pods","resourceVersion":"2870720"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:28:14.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1043" for this suite.

• [SLOW TEST:20.232 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":123,"skipped":1604,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:28:14.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul 20 21:28:14.221: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 20 21:28:14.257: INFO: Waiting for terminating namespaces to be deleted...
Jul 20 21:28:14.260: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Jul 20 21:28:14.279: INFO: kube-proxy-2ssxj from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container status recorded)
Jul 20 21:28:14.279: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 21:28:14.279: INFO: kindnet-bqk7h from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container status recorded)
Jul 20 21:28:14.279: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 20 21:28:14.279: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Jul 20 21:28:14.298: INFO: kube-proxy-67jwf from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container status recorded)
Jul 20 21:28:14.298: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 21:28:14.298: INFO: kindnet-klj8h from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container status recorded)
Jul 20 21:28:14.298: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Jul 20 21:28:14.386: INFO: Pod kindnet-bqk7h requesting resource cpu=100m on Node jerma-worker
Jul 20 21:28:14.386: INFO: Pod kindnet-klj8h requesting resource cpu=100m on Node jerma-worker2
Jul 20 21:28:14.386: INFO: Pod kube-proxy-2ssxj requesting resource cpu=0m on Node jerma-worker
Jul 20 21:28:14.386: INFO: Pod kube-proxy-67jwf requesting resource cpu=0m on Node jerma-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Jul 20 21:28:14.386: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker
Jul 20 21:28:14.393: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-32649368-deea-4838-af9a-7e9fd89de3c1.1623932851f1efc4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5919/filler-pod-32649368-deea-4838-af9a-7e9fd89de3c1 to jerma-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-32649368-deea-4838-af9a-7e9fd89de3c1.162393289a8c3e50], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-32649368-deea-4838-af9a-7e9fd89de3c1.16239328f8077677], Reason = [Created], Message = [Created container filler-pod-32649368-deea-4838-af9a-7e9fd89de3c1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-32649368-deea-4838-af9a-7e9fd89de3c1.162393290d28f299], Reason = [Started], Message = [Started container filler-pod-32649368-deea-4838-af9a-7e9fd89de3c1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dcd52862-14d2-442c-afc4-0eeceb237575.1623932853343ab3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5919/filler-pod-dcd52862-14d2-442c-afc4-0eeceb237575 to jerma-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dcd52862-14d2-442c-afc4-0eeceb237575.16239328aca853e4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dcd52862-14d2-442c-afc4-0eeceb237575.16239329122e8fb6], Reason = [Created], Message = [Created container filler-pod-dcd52862-14d2-442c-afc4-0eeceb237575]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-dcd52862-14d2-442c-afc4-0eeceb237575.16239329225591f9], Reason = [Started], Message = [Started container filler-pod-dcd52862-14d2-442c-afc4-0eeceb237575]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1623932942974350], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:28:19.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5919" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:5.398 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":124,"skipped":1615,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:28:19.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 21:28:20.493: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 21:28:22.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877300, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877300, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877300, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877300, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 21:28:25.624: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:28:25.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4860-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:28:26.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8207" for this suite.
STEP: Destroying namespace "webhook-8207-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.341 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":125,"skipped":1672,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:28:26.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 21:28:27.489: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 21:28:29.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877307, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877307, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877307, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877307, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:28:31.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877307, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877307, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877307, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877307, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 21:28:34.597: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:28:44.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8869" for this suite.
STEP: Destroying namespace "webhook-8869-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.001 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":126,"skipped":1682,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:28:44.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
STEP: creating the pod
Jul 20 21:28:44.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3416'
Jul 20 21:28:48.973: INFO: stderr: ""
Jul 20 21:28:48.973: INFO: stdout: "pod/pause created\n"
Jul 20 21:28:48.973: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul 20 21:28:48.973: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3416" to be "running and ready"
Jul 20 21:28:49.010: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 36.697655ms
Jul 20 21:28:51.172: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19842958s
Jul 20 21:28:53.225: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.252217344s
Jul 20 21:28:53.226: INFO: Pod "pause" satisfied condition "running and ready"
Jul 20 21:28:53.226: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: adding the label testing-label with value testing-label-value to a pod
Jul 20 21:28:53.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3416'
Jul 20 21:28:53.333: INFO: stderr: ""
Jul 20 21:28:53.333: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul 20 21:28:53.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3416'
Jul 20 21:28:53.452: INFO: stderr: ""
Jul 20 21:28:53.452: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul 20 21:28:53.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3416'
Jul 20 21:28:53.553: INFO: stderr: ""
Jul 20 21:28:53.553: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul 20 21:28:53.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3416'
Jul 20 21:28:53.632: INFO: stderr: ""
Jul 20 21:28:53.632: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282
STEP: using delete to clean up resources
Jul 20 21:28:53.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3416'
Jul 20 21:28:53.755: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 21:28:53.755: INFO: stdout: "pod \"pause\" force deleted\n"
Jul 20 21:28:53.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3416'
Jul 20 21:28:53.857: INFO: stderr: "No resources found in kubectl-3416 namespace.\n"
Jul 20 21:28:53.857: INFO: stdout: ""
Jul 20 21:28:53.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3416 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 20 21:28:53.957: INFO: stderr: ""
Jul 20 21:28:53.957: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:28:53.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3416" for this suite.

• [SLOW TEST:9.055 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":127,"skipped":1683,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:28:53.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Jul 20 21:28:54.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul 20 21:28:54.760: INFO: stderr: ""
Jul 20 21:28:54.760: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:28:54.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5384" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":128,"skipped":1739,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:28:54.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jul 20 21:28:54.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6857'
Jul 20 21:28:55.080: INFO: stderr: ""
Jul 20 21:28:55.080: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 21:28:55.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6857'
Jul 20 21:28:55.179: INFO: stderr: ""
Jul 20 21:28:55.179: INFO: stdout: "update-demo-nautilus-h858d update-demo-nautilus-nqk4l "
Jul 20 21:28:55.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h858d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:28:55.297: INFO: stderr: ""
Jul 20 21:28:55.297: INFO: stdout: ""
Jul 20 21:28:55.297: INFO: update-demo-nautilus-h858d is created but not running
Jul 20 21:29:00.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6857'
Jul 20 21:29:00.399: INFO: stderr: ""
Jul 20 21:29:00.399: INFO: stdout: "update-demo-nautilus-h858d update-demo-nautilus-nqk4l "
Jul 20 21:29:00.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h858d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:00.499: INFO: stderr: ""
Jul 20 21:29:00.499: INFO: stdout: "true"
Jul 20 21:29:00.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h858d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:00.597: INFO: stderr: ""
Jul 20 21:29:00.597: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 21:29:00.597: INFO: validating pod update-demo-nautilus-h858d
Jul 20 21:29:00.602: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 21:29:00.602: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 20 21:29:00.602: INFO: update-demo-nautilus-h858d is verified up and running
Jul 20 21:29:00.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqk4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:00.701: INFO: stderr: ""
Jul 20 21:29:00.701: INFO: stdout: "true"
Jul 20 21:29:00.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqk4l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:00.799: INFO: stderr: ""
Jul 20 21:29:00.800: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 21:29:00.800: INFO: validating pod update-demo-nautilus-nqk4l
Jul 20 21:29:00.804: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 21:29:00.804: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 20 21:29:00.804: INFO: update-demo-nautilus-nqk4l is verified up and running
STEP: scaling down the replication controller
Jul 20 21:29:00.806: INFO: scanned /root for discovery docs: 
Jul 20 21:29:00.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6857'
Jul 20 21:29:01.946: INFO: stderr: ""
Jul 20 21:29:01.946: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 21:29:01.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6857'
Jul 20 21:29:02.049: INFO: stderr: ""
Jul 20 21:29:02.049: INFO: stdout: "update-demo-nautilus-h858d update-demo-nautilus-nqk4l "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul 20 21:29:07.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6857'
Jul 20 21:29:07.154: INFO: stderr: ""
Jul 20 21:29:07.154: INFO: stdout: "update-demo-nautilus-h858d update-demo-nautilus-nqk4l "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul 20 21:29:12.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6857'
Jul 20 21:29:12.247: INFO: stderr: ""
Jul 20 21:29:12.247: INFO: stdout: "update-demo-nautilus-nqk4l "
Jul 20 21:29:12.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqk4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:12.343: INFO: stderr: ""
Jul 20 21:29:12.343: INFO: stdout: "true"
Jul 20 21:29:12.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqk4l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:12.431: INFO: stderr: ""
Jul 20 21:29:12.431: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 21:29:12.431: INFO: validating pod update-demo-nautilus-nqk4l
Jul 20 21:29:12.434: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 21:29:12.434: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 20 21:29:12.434: INFO: update-demo-nautilus-nqk4l is verified up and running
STEP: scaling up the replication controller
Jul 20 21:29:12.438: INFO: scanned /root for discovery docs: 
Jul 20 21:29:12.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6857'
Jul 20 21:29:13.554: INFO: stderr: ""
Jul 20 21:29:13.554: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 21:29:13.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6857'
Jul 20 21:29:13.650: INFO: stderr: ""
Jul 20 21:29:13.650: INFO: stdout: "update-demo-nautilus-nqk4l update-demo-nautilus-qtqct "
Jul 20 21:29:13.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqk4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:13.771: INFO: stderr: ""
Jul 20 21:29:13.771: INFO: stdout: "true"
Jul 20 21:29:13.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqk4l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:13.864: INFO: stderr: ""
Jul 20 21:29:13.865: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 21:29:13.865: INFO: validating pod update-demo-nautilus-nqk4l
Jul 20 21:29:13.933: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 21:29:13.933: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 20 21:29:13.933: INFO: update-demo-nautilus-nqk4l is verified up and running
Jul 20 21:29:13.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtqct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:14.174: INFO: stderr: ""
Jul 20 21:29:14.174: INFO: stdout: ""
Jul 20 21:29:14.174: INFO: update-demo-nautilus-qtqct is created but not running
Jul 20 21:29:19.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6857'
Jul 20 21:29:19.283: INFO: stderr: ""
Jul 20 21:29:19.283: INFO: stdout: "update-demo-nautilus-nqk4l update-demo-nautilus-qtqct "
Jul 20 21:29:19.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqk4l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:19.387: INFO: stderr: ""
Jul 20 21:29:19.387: INFO: stdout: "true"
Jul 20 21:29:19.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nqk4l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:19.472: INFO: stderr: ""
Jul 20 21:29:19.472: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 21:29:19.472: INFO: validating pod update-demo-nautilus-nqk4l
Jul 20 21:29:19.483: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 21:29:19.484: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 20 21:29:19.484: INFO: update-demo-nautilus-nqk4l is verified up and running
Jul 20 21:29:19.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtqct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:19.578: INFO: stderr: ""
Jul 20 21:29:19.578: INFO: stdout: "true"
Jul 20 21:29:19.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtqct -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6857'
Jul 20 21:29:19.664: INFO: stderr: ""
Jul 20 21:29:19.664: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 21:29:19.664: INFO: validating pod update-demo-nautilus-qtqct
Jul 20 21:29:19.667: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 21:29:19.667: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 20 21:29:19.667: INFO: update-demo-nautilus-qtqct is verified up and running
STEP: using delete to clean up resources
Jul 20 21:29:19.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6857'
Jul 20 21:29:19.776: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 21:29:19.776: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 20 21:29:19.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6857'
Jul 20 21:29:19.919: INFO: stderr: "No resources found in kubectl-6857 namespace.\n"
Jul 20 21:29:19.920: INFO: stdout: ""
Jul 20 21:29:19.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6857 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 20 21:29:20.030: INFO: stderr: ""
Jul 20 21:29:20.030: INFO: stdout: "update-demo-nautilus-nqk4l\nupdate-demo-nautilus-qtqct\n"
Jul 20 21:29:20.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6857'
Jul 20 21:29:20.631: INFO: stderr: "No resources found in kubectl-6857 namespace.\n"
Jul 20 21:29:20.631: INFO: stdout: ""
Jul 20 21:29:20.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6857 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 20 21:29:20.730: INFO: stderr: ""
Jul 20 21:29:20.730: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:29:20.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6857" for this suite.

• [SLOW TEST:25.969 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":129,"skipped":1764,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:29:20.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4729.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4729.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4729.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4729.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4729.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4729.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 21:29:27.135: INFO: DNS probes using dns-4729/dns-test-253b9fed-0bd5-4b38-b956-7abe86d4f6c5 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:29:27.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4729" for this suite.

• [SLOW TEST:6.524 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":130,"skipped":1813,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:29:27.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Jul 20 21:29:27.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4510'
Jul 20 21:29:27.892: INFO: stderr: ""
Jul 20 21:29:27.892: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 21:29:27.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4510'
Jul 20 21:29:28.052: INFO: stderr: ""
Jul 20 21:29:28.052: INFO: stdout: "update-demo-nautilus-9zlt6 update-demo-nautilus-qwnkh "
Jul 20 21:29:28.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9zlt6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4510'
Jul 20 21:29:28.166: INFO: stderr: ""
Jul 20 21:29:28.166: INFO: stdout: ""
Jul 20 21:29:28.166: INFO: update-demo-nautilus-9zlt6 is created but not running
Jul 20 21:29:33.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4510'
Jul 20 21:29:33.271: INFO: stderr: ""
Jul 20 21:29:33.271: INFO: stdout: "update-demo-nautilus-9zlt6 update-demo-nautilus-qwnkh "
Jul 20 21:29:33.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9zlt6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4510'
Jul 20 21:29:33.376: INFO: stderr: ""
Jul 20 21:29:33.376: INFO: stdout: "true"
Jul 20 21:29:33.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9zlt6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4510'
Jul 20 21:29:33.480: INFO: stderr: ""
Jul 20 21:29:33.480: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 21:29:33.480: INFO: validating pod update-demo-nautilus-9zlt6
Jul 20 21:29:33.484: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 21:29:33.484: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 21:29:33.484: INFO: update-demo-nautilus-9zlt6 is verified up and running
Jul 20 21:29:33.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qwnkh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4510'
Jul 20 21:29:33.573: INFO: stderr: ""
Jul 20 21:29:33.573: INFO: stdout: "true"
Jul 20 21:29:33.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qwnkh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4510'
Jul 20 21:29:33.652: INFO: stderr: ""
Jul 20 21:29:33.652: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 21:29:33.652: INFO: validating pod update-demo-nautilus-qwnkh
Jul 20 21:29:33.656: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 21:29:33.656: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 21:29:33.656: INFO: update-demo-nautilus-qwnkh is verified up and running
STEP: using delete to clean up resources
Jul 20 21:29:33.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4510'
Jul 20 21:29:33.757: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 21:29:33.758: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 20 21:29:33.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4510'
Jul 20 21:29:33.841: INFO: stderr: "No resources found in kubectl-4510 namespace.\n"
Jul 20 21:29:33.841: INFO: stdout: ""
Jul 20 21:29:33.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4510 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 20 21:29:33.940: INFO: stderr: ""
Jul 20 21:29:33.940: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:29:33.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4510" for this suite.

• [SLOW TEST:6.683 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":131,"skipped":1849,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:29:33.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 20 21:29:44.257: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 20 21:29:44.292: INFO: Pod pod-with-poststart-http-hook still exists
Jul 20 21:29:46.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 20 21:29:46.295: INFO: Pod pod-with-poststart-http-hook still exists
Jul 20 21:29:48.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 20 21:29:48.296: INFO: Pod pod-with-poststart-http-hook still exists
Jul 20 21:29:50.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 20 21:29:50.296: INFO: Pod pod-with-poststart-http-hook still exists
Jul 20 21:29:52.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 20 21:29:52.296: INFO: Pod pod-with-poststart-http-hook still exists
Jul 20 21:29:54.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 20 21:29:54.295: INFO: Pod pod-with-poststart-http-hook still exists
Jul 20 21:29:56.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 20 21:29:56.296: INFO: Pod pod-with-poststart-http-hook still exists
Jul 20 21:29:58.292: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 20 21:29:58.296: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:29:58.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7891" for this suite.

• [SLOW TEST:24.357 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":1882,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:29:58.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Jul 20 21:29:58.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:30:14.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6042" for this suite.

• [SLOW TEST:16.635 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":133,"skipped":1903,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:30:14.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 20 21:30:15.035: INFO: Waiting up to 5m0s for pod "pod-2a3c1bef-1703-4e48-9f6e-6a1274899863" in namespace "emptydir-9638" to be "success or failure"
Jul 20 21:30:15.062: INFO: Pod "pod-2a3c1bef-1703-4e48-9f6e-6a1274899863": Phase="Pending", Reason="", readiness=false. Elapsed: 26.797443ms
Jul 20 21:30:17.065: INFO: Pod "pod-2a3c1bef-1703-4e48-9f6e-6a1274899863": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030705734s
Jul 20 21:30:19.069: INFO: Pod "pod-2a3c1bef-1703-4e48-9f6e-6a1274899863": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034258946s
STEP: Saw pod success
Jul 20 21:30:19.069: INFO: Pod "pod-2a3c1bef-1703-4e48-9f6e-6a1274899863" satisfied condition "success or failure"
Jul 20 21:30:19.072: INFO: Trying to get logs from node jerma-worker pod pod-2a3c1bef-1703-4e48-9f6e-6a1274899863 container test-container: 
STEP: delete the pod
Jul 20 21:30:19.192: INFO: Waiting for pod pod-2a3c1bef-1703-4e48-9f6e-6a1274899863 to disappear
Jul 20 21:30:19.200: INFO: Pod pod-2a3c1bef-1703-4e48-9f6e-6a1274899863 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:30:19.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9638" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":1907,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:30:19.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Jul 20 21:30:19.549: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jul 20 21:30:19.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4617'
Jul 20 21:30:19.950: INFO: stderr: ""
Jul 20 21:30:19.950: INFO: stdout: "service/agnhost-slave created\n"
Jul 20 21:30:19.950: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jul 20 21:30:19.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4617'
Jul 20 21:30:20.223: INFO: stderr: ""
Jul 20 21:30:20.223: INFO: stdout: "service/agnhost-master created\n"
Jul 20 21:30:20.223: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul 20 21:30:20.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4617'
Jul 20 21:30:20.508: INFO: stderr: ""
Jul 20 21:30:20.508: INFO: stdout: "service/frontend created\n"
Jul 20 21:30:20.509: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jul 20 21:30:20.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4617'
Jul 20 21:30:20.763: INFO: stderr: ""
Jul 20 21:30:20.763: INFO: stdout: "deployment.apps/frontend created\n"
Jul 20 21:30:20.764: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 20 21:30:20.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4617'
Jul 20 21:30:21.056: INFO: stderr: ""
Jul 20 21:30:21.056: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jul 20 21:30:21.057: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 20 21:30:21.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4617'
Jul 20 21:30:21.372: INFO: stderr: ""
Jul 20 21:30:21.372: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jul 20 21:30:21.372: INFO: Waiting for all frontend pods to be Running.
Jul 20 21:30:31.423: INFO: Waiting for frontend to serve content.
Jul 20 21:30:31.433: INFO: Trying to add a new entry to the guestbook.
Jul 20 21:30:31.445: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul 20 21:30:31.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4617'
Jul 20 21:30:31.606: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 21:30:31.606: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul 20 21:30:31.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4617'
Jul 20 21:30:31.753: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 21:30:31.753: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 20 21:30:31.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4617'
Jul 20 21:30:31.931: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 21:30:31.931: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 20 21:30:31.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4617'
Jul 20 21:30:32.051: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 21:30:32.051: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 20 21:30:32.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4617'
Jul 20 21:30:32.177: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 21:30:32.177: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 20 21:30:32.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4617'
Jul 20 21:30:32.296: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 20 21:30:32.296: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:30:32.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4617" for this suite.

• [SLOW TEST:13.105 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":135,"skipped":1908,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:30:32.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-vwlb2 in namespace proxy-5863
I0720 21:30:32.822805       6 runners.go:189] Created replication controller with name: proxy-service-vwlb2, namespace: proxy-5863, replica count: 1
I0720 21:30:33.873327       6 runners.go:189] proxy-service-vwlb2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:30:34.873564       6 runners.go:189] proxy-service-vwlb2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:30:35.873751       6 runners.go:189] proxy-service-vwlb2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:30:36.873986       6 runners.go:189] proxy-service-vwlb2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:30:37.874216       6 runners.go:189] proxy-service-vwlb2 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:30:38.874499       6 runners.go:189] proxy-service-vwlb2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 21:30:39.874772       6 runners.go:189] proxy-service-vwlb2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 21:30:40.874986       6 runners.go:189] proxy-service-vwlb2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 21:30:41.875252       6 runners.go:189] proxy-service-vwlb2 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0720 21:30:42.875474       6 runners.go:189] proxy-service-vwlb2 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 20 21:30:42.878: INFO: setup took 10.494581838s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
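
Each attempt below fans out over the same 16 URL forms: pod proxies (by pod name, by name:port, and with explicit http:/https: schemes) and service proxies (by named port, again per scheme). Any one of them can be fetched directly using the raw API path shown in the log lines, for example:

# Pod proxy on the default port, and a service proxy on named port portname1.
kubectl --kubeconfig=/root/.kube/config get --raw /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/
kubectl --kubeconfig=/root/.kube/config get --raw /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/
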
Jul 20 21:30:42.885: INFO: (0) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 6.301875ms)
Jul 20 21:30:42.885: INFO: (0) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 6.645105ms)
Jul 20 21:30:42.885: INFO: (0) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 6.971868ms)
Jul 20 21:30:42.886: INFO: (0) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 7.235274ms)
Jul 20 21:30:42.886: INFO: (0) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 7.514381ms)
Jul 20 21:30:42.886: INFO: (0) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 7.693354ms)
Jul 20 21:30:42.886: INFO: (0) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 8.167814ms)
Jul 20 21:30:42.887: INFO: (0) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 8.510717ms)
Jul 20 21:30:42.887: INFO: (0) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 9.139508ms)
Jul 20 21:30:42.887: INFO: (0) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 8.975894ms)
Jul 20 21:30:42.889: INFO: (0) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 11.031861ms)
Jul 20 21:30:42.892: INFO: (0) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 13.488172ms)
Jul 20 21:30:42.892: INFO: (0) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 13.490018ms)
Jul 20 21:30:42.892: INFO: (0) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test (200; 3.274983ms)
Jul 20 21:30:42.896: INFO: (1) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 3.575971ms)
Jul 20 21:30:42.896: INFO: (1) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.467349ms)
Jul 20 21:30:42.897: INFO: (1) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 3.758468ms)
Jul 20 21:30:42.897: INFO: (1) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: ... (200; 4.202438ms)
Jul 20 21:30:42.897: INFO: (1) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 4.342598ms)
Jul 20 21:30:42.897: INFO: (1) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 4.482569ms)
Jul 20 21:30:42.898: INFO: (1) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 4.699561ms)
Jul 20 21:30:42.898: INFO: (1) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 4.625149ms)
Jul 20 21:30:42.898: INFO: (1) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 4.6699ms)
Jul 20 21:30:42.898: INFO: (1) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 4.688271ms)
Jul 20 21:30:42.898: INFO: (1) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 4.667237ms)
Jul 20 21:30:42.900: INFO: (2) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 2.333244ms)
Jul 20 21:30:42.901: INFO: (2) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 3.007321ms)
Jul 20 21:30:42.901: INFO: (2) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 3.285866ms)
Jul 20 21:30:42.901: INFO: (2) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.364345ms)
Jul 20 21:30:42.901: INFO: (2) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.421183ms)
Jul 20 21:30:42.901: INFO: (2) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.460648ms)
Jul 20 21:30:42.901: INFO: (2) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 3.464344ms)
Jul 20 21:30:42.901: INFO: (2) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test<... (200; 3.491685ms)
Jul 20 21:30:42.901: INFO: (2) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.466084ms)
Jul 20 21:30:42.902: INFO: (2) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 3.901787ms)
Jul 20 21:30:42.902: INFO: (2) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 3.932741ms)
Jul 20 21:30:42.902: INFO: (2) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 4.342058ms)
Jul 20 21:30:42.902: INFO: (2) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 4.397225ms)
Jul 20 21:30:42.902: INFO: (2) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 4.398061ms)
Jul 20 21:30:42.902: INFO: (2) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 4.573441ms)
Jul 20 21:30:42.906: INFO: (3) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 3.50651ms)
Jul 20 21:30:42.906: INFO: (3) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.46725ms)
Jul 20 21:30:42.906: INFO: (3) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 3.49365ms)
Jul 20 21:30:42.906: INFO: (3) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 3.688749ms)
Jul 20 21:30:42.906: INFO: (3) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 3.791859ms)
Jul 20 21:30:42.906: INFO: (3) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test (200; 6.958012ms)
Jul 20 21:30:42.915: INFO: (4) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 6.929478ms)
Jul 20 21:30:42.916: INFO: (4) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 8.062057ms)
Jul 20 21:30:42.916: INFO: (4) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 8.210977ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 8.861052ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test<... (200; 8.803981ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 8.801765ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 8.807615ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 8.789915ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 8.858752ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 8.910868ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 8.834754ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 8.883217ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 8.957695ms)
Jul 20 21:30:42.917: INFO: (4) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 8.900183ms)
Jul 20 21:30:42.919: INFO: (5) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 2.236405ms)
Jul 20 21:30:42.919: INFO: (5) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.275263ms)
Jul 20 21:30:42.920: INFO: (5) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 2.635416ms)
Jul 20 21:30:42.920: INFO: (5) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.671926ms)
Jul 20 21:30:42.920: INFO: (5) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 2.710981ms)
Jul 20 21:30:42.920: INFO: (5) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 2.748567ms)
Jul 20 21:30:42.921: INFO: (5) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 3.568852ms)
Jul 20 21:30:42.921: INFO: (5) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 3.717122ms)
Jul 20 21:30:42.921: INFO: (5) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 3.744798ms)
Jul 20 21:30:42.921: INFO: (5) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 3.752312ms)
Jul 20 21:30:42.921: INFO: (5) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test (200; 4.01102ms)
Jul 20 21:30:42.923: INFO: (6) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 1.899119ms)
Jul 20 21:30:42.923: INFO: (6) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.288623ms)
Jul 20 21:30:42.924: INFO: (6) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 3.333463ms)
Jul 20 21:30:42.924: INFO: (6) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 3.327984ms)
Jul 20 21:30:42.924: INFO: (6) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.364036ms)
Jul 20 21:30:42.924: INFO: (6) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 3.411321ms)
Jul 20 21:30:42.925: INFO: (6) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 3.632461ms)
Jul 20 21:30:42.925: INFO: (6) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 3.778937ms)
Jul 20 21:30:42.925: INFO: (6) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 3.831878ms)
Jul 20 21:30:42.925: INFO: (6) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 3.796691ms)
Jul 20 21:30:42.925: INFO: (6) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 3.824119ms)
Jul 20 21:30:42.926: INFO: (6) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 5.410089ms)
Jul 20 21:30:42.927: INFO: (6) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 5.493575ms)
Jul 20 21:30:42.927: INFO: (6) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 5.614483ms)
Jul 20 21:30:42.927: INFO: (6) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test<... (200; 3.183355ms)
Jul 20 21:30:42.930: INFO: (7) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.243171ms)
Jul 20 21:30:42.930: INFO: (7) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: ... (200; 3.246589ms)
Jul 20 21:30:42.930: INFO: (7) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.286446ms)
Jul 20 21:30:42.930: INFO: (7) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.251809ms)
Jul 20 21:30:42.930: INFO: (7) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 3.565778ms)
Jul 20 21:30:42.931: INFO: (7) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 3.741352ms)
Jul 20 21:30:42.931: INFO: (7) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 3.886084ms)
Jul 20 21:30:42.931: INFO: (7) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 3.922332ms)
Jul 20 21:30:42.931: INFO: (7) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 3.927881ms)
Jul 20 21:30:42.931: INFO: (7) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 4.046375ms)
Jul 20 21:30:42.931: INFO: (7) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 4.061521ms)
Jul 20 21:30:42.931: INFO: (7) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 4.126119ms)
Jul 20 21:30:42.933: INFO: (8) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 1.742016ms)
Jul 20 21:30:42.934: INFO: (8) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 2.697224ms)
Jul 20 21:30:42.934: INFO: (8) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 3.059231ms)
Jul 20 21:30:42.934: INFO: (8) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test<... (200; 3.219328ms)
Jul 20 21:30:42.934: INFO: (8) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.20609ms)
Jul 20 21:30:42.934: INFO: (8) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 3.203696ms)
Jul 20 21:30:42.936: INFO: (8) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 4.437826ms)
Jul 20 21:30:42.936: INFO: (8) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 4.414331ms)
Jul 20 21:30:42.936: INFO: (8) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 4.480974ms)
Jul 20 21:30:42.936: INFO: (8) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 4.548657ms)
Jul 20 21:30:42.936: INFO: (8) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 4.543562ms)
Jul 20 21:30:42.936: INFO: (8) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 4.588245ms)
Jul 20 21:30:42.939: INFO: (9) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 2.834818ms)
Jul 20 21:30:42.939: INFO: (9) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.86553ms)
Jul 20 21:30:42.939: INFO: (9) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test<... (200; 3.88403ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 3.979635ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 3.929976ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 3.918533ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 3.99815ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 4.028742ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.94805ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 4.053733ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 4.254974ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 4.265937ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 4.592396ms)
Jul 20 21:30:42.940: INFO: (9) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 4.581247ms)
Jul 20 21:30:42.941: INFO: (9) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 4.77895ms)
Jul 20 21:30:42.944: INFO: (10) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: ... (200; 3.437919ms)
Jul 20 21:30:42.944: INFO: (10) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.513573ms)
Jul 20 21:30:42.944: INFO: (10) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.768847ms)
Jul 20 21:30:42.945: INFO: (10) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 3.854885ms)
Jul 20 21:30:42.945: INFO: (10) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.912356ms)
Jul 20 21:30:42.945: INFO: (10) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 4.027754ms)
Jul 20 21:30:42.945: INFO: (10) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 4.074063ms)
Jul 20 21:30:42.945: INFO: (10) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 4.121702ms)
Jul 20 21:30:42.945: INFO: (10) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 4.194045ms)
Jul 20 21:30:42.945: INFO: (10) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 4.573935ms)
Jul 20 21:30:42.945: INFO: (10) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 4.655903ms)
Jul 20 21:30:42.946: INFO: (10) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 4.768262ms)
Jul 20 21:30:42.948: INFO: (11) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 2.445259ms)
Jul 20 21:30:42.948: INFO: (11) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.502736ms)
Jul 20 21:30:42.949: INFO: (11) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 3.063814ms)
Jul 20 21:30:42.949: INFO: (11) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 3.103353ms)
Jul 20 21:30:42.949: INFO: (11) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 3.120883ms)
Jul 20 21:30:42.949: INFO: (11) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 3.371256ms)
Jul 20 21:30:42.949: INFO: (11) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: ... (200; 3.4957ms)
Jul 20 21:30:42.949: INFO: (11) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.529379ms)
Jul 20 21:30:42.950: INFO: (11) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 4.296377ms)
Jul 20 21:30:42.950: INFO: (11) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 4.324852ms)
Jul 20 21:30:42.950: INFO: (11) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 4.279663ms)
Jul 20 21:30:42.950: INFO: (11) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 4.321943ms)
Jul 20 21:30:42.950: INFO: (11) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 4.279455ms)
Jul 20 21:30:42.953: INFO: (12) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 2.864124ms)
Jul 20 21:30:42.953: INFO: (12) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.122187ms)
Jul 20 21:30:42.953: INFO: (12) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 3.194585ms)
Jul 20 21:30:42.953: INFO: (12) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 3.111989ms)
Jul 20 21:30:42.953: INFO: (12) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.244441ms)
Jul 20 21:30:42.953: INFO: (12) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 3.269129ms)
Jul 20 21:30:42.956: INFO: (12) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 5.628235ms)
Jul 20 21:30:42.956: INFO: (12) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: ... (200; 2.280499ms)
Jul 20 21:30:42.959: INFO: (13) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 2.293471ms)
Jul 20 21:30:42.963: INFO: (13) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 5.883815ms)
Jul 20 21:30:42.963: INFO: (13) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 5.91988ms)
Jul 20 21:30:42.964: INFO: (13) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 7.312187ms)
Jul 20 21:30:42.964: INFO: (13) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 7.309089ms)
Jul 20 21:30:42.966: INFO: (13) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 8.666574ms)
Jul 20 21:30:42.966: INFO: (13) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 9.159972ms)
Jul 20 21:30:42.966: INFO: (13) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test<... (200; 9.23548ms)
Jul 20 21:30:42.966: INFO: (13) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 9.319806ms)
Jul 20 21:30:42.966: INFO: (13) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 9.258274ms)
Jul 20 21:30:42.966: INFO: (13) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 9.308075ms)
Jul 20 21:30:42.969: INFO: (14) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 2.624167ms)
Jul 20 21:30:42.969: INFO: (14) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.731042ms)
Jul 20 21:30:42.970: INFO: (14) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.154555ms)
Jul 20 21:30:42.970: INFO: (14) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 3.641566ms)
Jul 20 21:30:42.970: INFO: (14) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 3.810238ms)
Jul 20 21:30:42.970: INFO: (14) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: ... (200; 4.722216ms)
Jul 20 21:30:42.971: INFO: (14) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 4.819439ms)
Jul 20 21:30:42.973: INFO: (15) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 1.91872ms)
Jul 20 21:30:42.974: INFO: (15) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.786524ms)
Jul 20 21:30:42.974: INFO: (15) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 2.794449ms)
Jul 20 21:30:42.974: INFO: (15) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 2.977443ms)
Jul 20 21:30:42.974: INFO: (15) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.945697ms)
Jul 20 21:30:42.974: INFO: (15) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 2.964794ms)
Jul 20 21:30:42.974: INFO: (15) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 2.945158ms)
Jul 20 21:30:42.974: INFO: (15) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 3.000837ms)
Jul 20 21:30:42.974: INFO: (15) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 3.038576ms)
Jul 20 21:30:42.974: INFO: (15) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test (200; 2.744441ms)
Jul 20 21:30:42.979: INFO: (16) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.941095ms)
Jul 20 21:30:42.979: INFO: (16) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 3.075433ms)
Jul 20 21:30:42.979: INFO: (16) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.965163ms)
Jul 20 21:30:42.979: INFO: (16) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.230333ms)
Jul 20 21:30:42.979: INFO: (16) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 3.260382ms)
Jul 20 21:30:42.979: INFO: (16) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 3.236509ms)
Jul 20 21:30:42.979: INFO: (16) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.359628ms)
Jul 20 21:30:42.979: INFO: (16) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: ... (200; 3.292321ms)
Jul 20 21:30:42.980: INFO: (16) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 3.828943ms)
Jul 20 21:30:42.980: INFO: (16) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 3.931692ms)
Jul 20 21:30:42.980: INFO: (16) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 3.990634ms)
Jul 20 21:30:42.980: INFO: (16) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 3.996577ms)
Jul 20 21:30:42.980: INFO: (16) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 3.990107ms)
Jul 20 21:30:42.980: INFO: (16) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 4.084789ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 2.43776ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 2.627344ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:1080/proxy/: test<... (200; 2.613858ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:460/proxy/: tls baz (200; 2.801936ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 2.736017ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 2.805323ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 2.976648ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 3.043271ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:462/proxy/: tls qux (200; 3.052768ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.414391ms)
Jul 20 21:30:42.983: INFO: (17) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test<... (200; 3.124048ms)
Jul 20 21:30:42.987: INFO: (18) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 3.399952ms)
Jul 20 21:30:42.987: INFO: (18) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 3.455951ms)
Jul 20 21:30:42.987: INFO: (18) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 3.438416ms)
Jul 20 21:30:42.987: INFO: (18) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.520934ms)
Jul 20 21:30:42.987: INFO: (18) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test (200; 3.551195ms)
Jul 20 21:30:42.988: INFO: (18) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname1/proxy/: foo (200; 3.631579ms)
Jul 20 21:30:42.988: INFO: (18) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 3.928428ms)
Jul 20 21:30:42.988: INFO: (18) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 3.964387ms)
Jul 20 21:30:42.988: INFO: (18) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname1/proxy/: tls baz (200; 4.430539ms)
Jul 20 21:30:42.988: INFO: (18) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 4.472768ms)
Jul 20 21:30:42.990: INFO: (19) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:162/proxy/: bar (200; 1.954299ms)
Jul 20 21:30:42.991: INFO: (19) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6/proxy/: test (200; 2.273413ms)
Jul 20 21:30:42.991: INFO: (19) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:1080/proxy/: ... (200; 2.288511ms)
Jul 20 21:30:42.991: INFO: (19) /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 2.86831ms)
Jul 20 21:30:42.992: INFO: (19) /api/v1/namespaces/proxy-5863/pods/http:proxy-service-vwlb2-8fbk6:160/proxy/: foo (200; 3.439975ms)
Jul 20 21:30:42.992: INFO: (19) /api/v1/namespaces/proxy-5863/pods/https:proxy-service-vwlb2-8fbk6:443/proxy/: test<... (200; 3.785484ms)
Jul 20 21:30:42.993: INFO: (19) /api/v1/namespaces/proxy-5863/services/https:proxy-service-vwlb2:tlsportname2/proxy/: tls qux (200; 4.413645ms)
Jul 20 21:30:42.993: INFO: (19) /api/v1/namespaces/proxy-5863/services/http:proxy-service-vwlb2:portname2/proxy/: bar (200; 4.470357ms)
Jul 20 21:30:42.993: INFO: (19) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/: foo (200; 4.479474ms)
Jul 20 21:30:42.993: INFO: (19) /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname2/proxy/: bar (200; 4.492042ms)
STEP: deleting ReplicationController proxy-service-vwlb2 in namespace proxy-5863, will wait for the garbage collector to delete the pods
Jul 20 21:30:43.051: INFO: Deleting ReplicationController proxy-service-vwlb2 took: 6.516493ms
Jul 20 21:30:43.351: INFO: Terminating ReplicationController proxy-service-vwlb2 pods took: 300.236015ms
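
The two INFO lines above are the client side of a delete that relies on server-side garbage collection: the ReplicationController is deleted without orphaning, and the suite then waits for the garbage collector to terminate the dependent pods. A minimal sketch of that call with client-go, assuming the pre-context signatures matching the v1.17 era of this run (namespace and name taken from the log):

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite uses.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Background propagation: the RC object is deleted immediately and
        // the garbage collector terminates its pods afterwards, which is
        // the behaviour the log reports ("will wait for the garbage
        // collector to delete the pods").
        policy := metav1.DeletePropagationBackground
        err = cs.CoreV1().ReplicationControllers("proxy-5863").
            Delete("proxy-service-vwlb2", &metav1.DeleteOptions{PropagationPolicy: &policy})
        if err != nil {
            panic(err)
        }
    }
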
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:30:57.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5863" for this suite.

• [SLOW TEST:25.149 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":136,"skipped":1937,"failed":0}
SSSSSSSSSSSSSSSS
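
Each numbered round above sweeps the apiserver proxy subresource: a URL of the form /api/v1/namespaces/<ns>/services/<svc>:<port>/proxy/<path> or .../pods/<pod>:<port>/proxy/<path> is forwarded by the apiserver to the backing endpoint, and the framework records the (truncated) response body plus the latency of every scheme/port variant. A minimal sketch of two such requests via client-go's ProxyGet helpers, assuming the v1.17-era signatures and reusing names from this run:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := kubernetes.NewForConfig(config)
        must(err)

        // GET /api/v1/namespaces/proxy-5863/services/proxy-service-vwlb2:portname1/proxy/
        body, err := cs.CoreV1().Services("proxy-5863").
            ProxyGet("http", "proxy-service-vwlb2", "portname1", "/", nil).DoRaw()
        must(err)
        fmt.Printf("service proxy body: %q\n", body) // the log shows "foo" for portname1

        // GET /api/v1/namespaces/proxy-5863/pods/proxy-service-vwlb2-8fbk6:160/proxy/
        body, err = cs.CoreV1().Pods("proxy-5863").
            ProxyGet("http", "proxy-service-vwlb2-8fbk6", "160", "/", nil).DoRaw()
        must(err)
        fmt.Printf("pod proxy body: %q\n", body) // the log shows "foo" for port 160
    }

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }
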
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:30:57.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:30:57.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:31:01.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3024" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":1953,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
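
The test above fetches the pod's log subresource over a websocket connection; apart from the transport it is the same GET of /api/v1/namespaces/<ns>/pods/<name>/log that the plain streaming client issues. A sketch of the equivalent read with client-go (pod and container names are hypothetical, since the log does not print them):

    package main

    import (
        "io"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := kubernetes.NewForConfig(config)
        must(err)

        // Stream logs from a hypothetical pod in the namespace this test used.
        stream, err := cs.CoreV1().Pods("pods-3024").
            GetLogs("example-pod", &corev1.PodLogOptions{Container: "main", Follow: true}).
            Stream()
        must(err)
        defer stream.Close()
        io.Copy(os.Stdout, stream) // copy log lines until the stream closes
    }

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }
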
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:31:01.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 20 21:31:01.717: INFO: Waiting up to 5m0s for pod "pod-150e499e-4cdd-4381-9b69-85978d57eea4" in namespace "emptydir-8241" to be "success or failure"
Jul 20 21:31:01.735: INFO: Pod "pod-150e499e-4cdd-4381-9b69-85978d57eea4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.914466ms
Jul 20 21:31:03.826: INFO: Pod "pod-150e499e-4cdd-4381-9b69-85978d57eea4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108948478s
Jul 20 21:31:05.829: INFO: Pod "pod-150e499e-4cdd-4381-9b69-85978d57eea4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112315741s
STEP: Saw pod success
Jul 20 21:31:05.829: INFO: Pod "pod-150e499e-4cdd-4381-9b69-85978d57eea4" satisfied condition "success or failure"
Jul 20 21:31:05.831: INFO: Trying to get logs from node jerma-worker2 pod pod-150e499e-4cdd-4381-9b69-85978d57eea4 container test-container: 
STEP: delete the pod
Jul 20 21:31:05.860: INFO: Waiting for pod pod-150e499e-4cdd-4381-9b69-85978d57eea4 to disappear
Jul 20 21:31:05.865: INFO: Pod pod-150e499e-4cdd-4381-9b69-85978d57eea4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:31:05.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8241" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":1983,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
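
The pod in this test mounts a memory-backed EmptyDir (tmpfs) and exits successfully once a root-owned file created with mode 0777 reports the expected permissions, which is why the phase moves Pending → Succeeded. A rough equivalent of that pod in Go, assuming a generic busybox image rather than the suite's own mount-test image and flags:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := kubernetes.NewForConfig(config)
        must(err)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-tmpfs"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" backs the volume with tmpfs.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0777 /mnt/f && stat -c %a /mnt/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
                }},
            },
        }
        _, err = cs.CoreV1().Pods("emptydir-8241").Create(pod)
        must(err)
    }

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }
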
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:31:05.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jul 20 21:31:06.004: INFO: >>> kubeConfig: /root/.kube/config
Jul 20 21:31:08.913: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:31:18.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5955" for this suite.

• [SLOW TEST:12.381 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":139,"skipped":2032,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
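
"Publishing" here means the CRD schemas appear as definitions in the cluster's aggregated OpenAPI v2 document, so two kinds sharing a group/version must both show up without clobbering one another. A sketch of the verification side, pulling /openapi/v2 and listing matching definitions; the com.example marker is a placeholder, since the test generates its own random group names (OpenAPI definition keys use the reverse-DNS group, e.g. com.example.v1.Widget):

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := kubernetes.NewForConfig(config)
        must(err)

        // Fetch the aggregated OpenAPI v2 document from the apiserver.
        raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").DoRaw()
        must(err)

        var doc struct {
            Definitions map[string]json.RawMessage `json:"definitions"`
        }
        must(json.Unmarshal(raw, &doc))

        // Both published kinds should appear as definitions for the group.
        for name := range doc.Definitions {
            if strings.Contains(name, "com.example") { // placeholder group marker
                fmt.Println(name)
            }
        }
    }

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }
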
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:31:18.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5641
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jul 20 21:31:18.473: INFO: Found 0 stateful pods, waiting for 3
Jul 20 21:31:28.478: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 21:31:28.478: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 21:31:28.478: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 21:31:28.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5641 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 21:31:28.763: INFO: stderr: "I0720 21:31:28.611600    3035 log.go:172] (0xc0003c0000) (0xc000902000) Create stream\nI0720 21:31:28.611666    3035 log.go:172] (0xc0003c0000) (0xc000902000) Stream added, broadcasting: 1\nI0720 21:31:28.617096    3035 log.go:172] (0xc0003c0000) Reply frame received for 1\nI0720 21:31:28.617151    3035 log.go:172] (0xc0003c0000) (0xc0005a85a0) Create stream\nI0720 21:31:28.617174    3035 log.go:172] (0xc0003c0000) (0xc0005a85a0) Stream added, broadcasting: 3\nI0720 21:31:28.618236    3035 log.go:172] (0xc0003c0000) Reply frame received for 3\nI0720 21:31:28.618272    3035 log.go:172] (0xc0003c0000) (0xc0009020a0) Create stream\nI0720 21:31:28.618283    3035 log.go:172] (0xc0003c0000) (0xc0009020a0) Stream added, broadcasting: 5\nI0720 21:31:28.619254    3035 log.go:172] (0xc0003c0000) Reply frame received for 5\nI0720 21:31:28.709878    3035 log.go:172] (0xc0003c0000) Data frame received for 5\nI0720 21:31:28.709909    3035 log.go:172] (0xc0009020a0) (5) Data frame handling\nI0720 21:31:28.709931    3035 log.go:172] (0xc0009020a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 21:31:28.756202    3035 log.go:172] (0xc0003c0000) Data frame received for 3\nI0720 21:31:28.756254    3035 log.go:172] (0xc0005a85a0) (3) Data frame handling\nI0720 21:31:28.756285    3035 log.go:172] (0xc0005a85a0) (3) Data frame sent\nI0720 21:31:28.756473    3035 log.go:172] (0xc0003c0000) Data frame received for 5\nI0720 21:31:28.756498    3035 log.go:172] (0xc0009020a0) (5) Data frame handling\nI0720 21:31:28.756606    3035 log.go:172] (0xc0003c0000) Data frame received for 3\nI0720 21:31:28.756647    3035 log.go:172] (0xc0005a85a0) (3) Data frame handling\nI0720 21:31:28.758568    3035 log.go:172] (0xc0003c0000) Data frame received for 1\nI0720 21:31:28.758637    3035 log.go:172] (0xc000902000) (1) Data frame handling\nI0720 21:31:28.758664    3035 log.go:172] (0xc000902000) (1) Data frame sent\nI0720 21:31:28.758679    3035 log.go:172] (0xc0003c0000) (0xc000902000) Stream removed, broadcasting: 1\nI0720 21:31:28.758757    3035 log.go:172] (0xc0003c0000) Go away received\nI0720 21:31:28.758992    3035 log.go:172] (0xc0003c0000) (0xc000902000) Stream removed, broadcasting: 1\nI0720 21:31:28.759009    3035 log.go:172] (0xc0003c0000) (0xc0005a85a0) Stream removed, broadcasting: 3\nI0720 21:31:28.759016    3035 log.go:172] (0xc0003c0000) (0xc0009020a0) Stream removed, broadcasting: 5\n"
Jul 20 21:31:28.763: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 21:31:28.763: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
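
The stderr dump above is kubectl's own trace of the exec protocol: one SPDY stream is added per requested channel (the "Stream added, broadcasting: 1/3/5" lines), with the shell's xtrace "+ mv -v …" arriving on the stderr stream and the mv output on stdout. A sketch of the same exec issued programmatically with client-go's remotecommand package (v1.17-era Stream without context; namespace and pod name from this run):

    package main

    import (
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := kubernetes.NewForConfig(config)
        must(err)

        // POST .../pods/ss2-1/exec; the multiplexed streams seen in the
        // trace above carry stdout, stderr and the error channel.
        req := cs.CoreV1().RESTClient().Post().
            Resource("pods").Namespace("statefulset-5641").Name("ss2-1").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Command: []string{"/bin/sh", "-c", "mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true"},
                Stdout:  true,
                Stderr:  true,
            }, scheme.ParameterCodec)
        exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        must(err)
        must(exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}))
    }

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }
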

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul 20 21:31:38.793: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul 20 21:31:48.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5641 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 21:31:49.054: INFO: stderr: "I0720 21:31:48.953710    3057 log.go:172] (0xc000b42840) (0xc00090a000) Create stream\nI0720 21:31:48.953768    3057 log.go:172] (0xc000b42840) (0xc00090a000) Stream added, broadcasting: 1\nI0720 21:31:48.956395    3057 log.go:172] (0xc000b42840) Reply frame received for 1\nI0720 21:31:48.956438    3057 log.go:172] (0xc000b42840) (0xc000737ae0) Create stream\nI0720 21:31:48.956449    3057 log.go:172] (0xc000b42840) (0xc000737ae0) Stream added, broadcasting: 3\nI0720 21:31:48.957615    3057 log.go:172] (0xc000b42840) Reply frame received for 3\nI0720 21:31:48.957657    3057 log.go:172] (0xc000b42840) (0xc00090a0a0) Create stream\nI0720 21:31:48.957671    3057 log.go:172] (0xc000b42840) (0xc00090a0a0) Stream added, broadcasting: 5\nI0720 21:31:48.958654    3057 log.go:172] (0xc000b42840) Reply frame received for 5\nI0720 21:31:49.048685    3057 log.go:172] (0xc000b42840) Data frame received for 5\nI0720 21:31:49.048849    3057 log.go:172] (0xc00090a0a0) (5) Data frame handling\nI0720 21:31:49.048880    3057 log.go:172] (0xc00090a0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 21:31:49.048914    3057 log.go:172] (0xc000b42840) Data frame received for 3\nI0720 21:31:49.048933    3057 log.go:172] (0xc000737ae0) (3) Data frame handling\nI0720 21:31:49.048952    3057 log.go:172] (0xc000737ae0) (3) Data frame sent\nI0720 21:31:49.048971    3057 log.go:172] (0xc000b42840) Data frame received for 3\nI0720 21:31:49.048986    3057 log.go:172] (0xc000737ae0) (3) Data frame handling\nI0720 21:31:49.049010    3057 log.go:172] (0xc000b42840) Data frame received for 5\nI0720 21:31:49.049028    3057 log.go:172] (0xc00090a0a0) (5) Data frame handling\nI0720 21:31:49.050640    3057 log.go:172] (0xc000b42840) Data frame received for 1\nI0720 21:31:49.050656    3057 log.go:172] (0xc00090a000) (1) Data frame handling\nI0720 21:31:49.050665    3057 log.go:172] (0xc00090a000) (1) Data frame sent\nI0720 21:31:49.050724    3057 log.go:172] (0xc000b42840) (0xc00090a000) Stream removed, broadcasting: 1\nI0720 21:31:49.050771    3057 log.go:172] (0xc000b42840) Go away received\nI0720 21:31:49.051041    3057 log.go:172] (0xc000b42840) (0xc00090a000) Stream removed, broadcasting: 1\nI0720 21:31:49.051058    3057 log.go:172] (0xc000b42840) (0xc000737ae0) Stream removed, broadcasting: 3\nI0720 21:31:49.051066    3057 log.go:172] (0xc000b42840) (0xc00090a0a0) Stream removed, broadcasting: 5\n"
Jul 20 21:31:49.054: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 21:31:49.054: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

STEP: Rolling back to a previous revision
Jul 20 21:32:19.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5641 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 20 21:32:19.369: INFO: stderr: "I0720 21:32:19.203406    3078 log.go:172] (0xc000646a50) (0xc000634000) Create stream\nI0720 21:32:19.203465    3078 log.go:172] (0xc000646a50) (0xc000634000) Stream added, broadcasting: 1\nI0720 21:32:19.205938    3078 log.go:172] (0xc000646a50) Reply frame received for 1\nI0720 21:32:19.205983    3078 log.go:172] (0xc000646a50) (0xc0006340a0) Create stream\nI0720 21:32:19.205997    3078 log.go:172] (0xc000646a50) (0xc0006340a0) Stream added, broadcasting: 3\nI0720 21:32:19.206746    3078 log.go:172] (0xc000646a50) Reply frame received for 3\nI0720 21:32:19.206780    3078 log.go:172] (0xc000646a50) (0xc0008fa000) Create stream\nI0720 21:32:19.206794    3078 log.go:172] (0xc000646a50) (0xc0008fa000) Stream added, broadcasting: 5\nI0720 21:32:19.207571    3078 log.go:172] (0xc000646a50) Reply frame received for 5\nI0720 21:32:19.309414    3078 log.go:172] (0xc000646a50) Data frame received for 5\nI0720 21:32:19.309446    3078 log.go:172] (0xc0008fa000) (5) Data frame handling\nI0720 21:32:19.309468    3078 log.go:172] (0xc0008fa000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0720 21:32:19.361155    3078 log.go:172] (0xc000646a50) Data frame received for 3\nI0720 21:32:19.361196    3078 log.go:172] (0xc0006340a0) (3) Data frame handling\nI0720 21:32:19.361225    3078 log.go:172] (0xc0006340a0) (3) Data frame sent\nI0720 21:32:19.361241    3078 log.go:172] (0xc000646a50) Data frame received for 3\nI0720 21:32:19.361255    3078 log.go:172] (0xc0006340a0) (3) Data frame handling\nI0720 21:32:19.361438    3078 log.go:172] (0xc000646a50) Data frame received for 5\nI0720 21:32:19.361464    3078 log.go:172] (0xc0008fa000) (5) Data frame handling\nI0720 21:32:19.362923    3078 log.go:172] (0xc000646a50) Data frame received for 1\nI0720 21:32:19.362942    3078 log.go:172] (0xc000634000) (1) Data frame handling\nI0720 21:32:19.362953    3078 log.go:172] (0xc000634000) (1) Data frame sent\nI0720 21:32:19.362977    3078 log.go:172] (0xc000646a50) (0xc000634000) Stream removed, broadcasting: 1\nI0720 21:32:19.362995    3078 log.go:172] (0xc000646a50) Go away received\nI0720 21:32:19.363505    3078 log.go:172] (0xc000646a50) (0xc000634000) Stream removed, broadcasting: 1\nI0720 21:32:19.363548    3078 log.go:172] (0xc000646a50) (0xc0006340a0) Stream removed, broadcasting: 3\nI0720 21:32:19.363562    3078 log.go:172] (0xc000646a50) (0xc0008fa000) Stream removed, broadcasting: 5\n"
Jul 20 21:32:19.369: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 20 21:32:19.369: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 20 21:32:29.400: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul 20 21:32:39.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5641 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 20 21:32:39.675: INFO: stderr: "I0720 21:32:39.573219    3102 log.go:172] (0xc000104e70) (0xc0009b8000) Create stream\nI0720 21:32:39.573274    3102 log.go:172] (0xc000104e70) (0xc0009b8000) Stream added, broadcasting: 1\nI0720 21:32:39.576231    3102 log.go:172] (0xc000104e70) Reply frame received for 1\nI0720 21:32:39.576279    3102 log.go:172] (0xc000104e70) (0xc0006dda40) Create stream\nI0720 21:32:39.576293    3102 log.go:172] (0xc000104e70) (0xc0006dda40) Stream added, broadcasting: 3\nI0720 21:32:39.577456    3102 log.go:172] (0xc000104e70) Reply frame received for 3\nI0720 21:32:39.577495    3102 log.go:172] (0xc000104e70) (0xc0009b80a0) Create stream\nI0720 21:32:39.577507    3102 log.go:172] (0xc000104e70) (0xc0009b80a0) Stream added, broadcasting: 5\nI0720 21:32:39.578557    3102 log.go:172] (0xc000104e70) Reply frame received for 5\nI0720 21:32:39.668382    3102 log.go:172] (0xc000104e70) Data frame received for 3\nI0720 21:32:39.668425    3102 log.go:172] (0xc0006dda40) (3) Data frame handling\nI0720 21:32:39.668436    3102 log.go:172] (0xc0006dda40) (3) Data frame sent\nI0720 21:32:39.668445    3102 log.go:172] (0xc000104e70) Data frame received for 3\nI0720 21:32:39.668451    3102 log.go:172] (0xc0006dda40) (3) Data frame handling\nI0720 21:32:39.668477    3102 log.go:172] (0xc000104e70) Data frame received for 5\nI0720 21:32:39.668483    3102 log.go:172] (0xc0009b80a0) (5) Data frame handling\nI0720 21:32:39.668489    3102 log.go:172] (0xc0009b80a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0720 21:32:39.668506    3102 log.go:172] (0xc000104e70) Data frame received for 5\nI0720 21:32:39.668559    3102 log.go:172] (0xc0009b80a0) (5) Data frame handling\nI0720 21:32:39.670120    3102 log.go:172] (0xc000104e70) Data frame received for 1\nI0720 21:32:39.670155    3102 log.go:172] (0xc0009b8000) (1) Data frame handling\nI0720 21:32:39.670173    3102 log.go:172] (0xc0009b8000) (1) Data frame sent\nI0720 21:32:39.670192    3102 log.go:172] (0xc000104e70) (0xc0009b8000) Stream removed, broadcasting: 1\nI0720 21:32:39.670210    3102 log.go:172] (0xc000104e70) Go away received\nI0720 21:32:39.670649    3102 log.go:172] (0xc000104e70) (0xc0009b8000) Stream removed, broadcasting: 1\nI0720 21:32:39.670675    3102 log.go:172] (0xc000104e70) (0xc0006dda40) Stream removed, broadcasting: 3\nI0720 21:32:39.670689    3102 log.go:172] (0xc000104e70) (0xc0009b80a0) Stream removed, broadcasting: 5\n"
Jul 20 21:32:39.675: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 20 21:32:39.675: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 20 21:32:49.697: INFO: Waiting for StatefulSet statefulset-5641/ss2 to complete update
Jul 20 21:32:49.698: INFO: Waiting for Pod statefulset-5641/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul 20 21:32:49.698: INFO: Waiting for Pod statefulset-5641/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul 20 21:32:59.705: INFO: Waiting for StatefulSet statefulset-5641/ss2 to complete update
Jul 20 21:32:59.705: INFO: Waiting for Pod statefulset-5641/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul 20 21:33:09.729: INFO: Waiting for StatefulSet statefulset-5641/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 20 21:33:19.721: INFO: Deleting all statefulset in ns statefulset-5641
Jul 20 21:33:19.724: INFO: Scaling statefulset ss2 to 0
Jul 20 21:33:39.749: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 21:33:39.752: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:33:39.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5641" for this suite.

• [SLOW TEST:141.439 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":140,"skipped":2055,"failed":0}
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:33:39.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Jul 20 21:33:39.829: INFO: Waiting up to 5m0s for pod "client-containers-0140390b-c420-45f0-84f4-62950a938bcc" in namespace "containers-326" to be "success or failure"
Jul 20 21:33:39.852: INFO: Pod "client-containers-0140390b-c420-45f0-84f4-62950a938bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.757065ms
Jul 20 21:33:41.856: INFO: Pod "client-containers-0140390b-c420-45f0-84f4-62950a938bcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026461516s
Jul 20 21:33:43.862: INFO: Pod "client-containers-0140390b-c420-45f0-84f4-62950a938bcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033313412s
STEP: Saw pod success
Jul 20 21:33:43.862: INFO: Pod "client-containers-0140390b-c420-45f0-84f4-62950a938bcc" satisfied condition "success or failure"
Jul 20 21:33:43.865: INFO: Trying to get logs from node jerma-worker2 pod client-containers-0140390b-c420-45f0-84f4-62950a938bcc container test-container: 
STEP: delete the pod
Jul 20 21:33:43.894: INFO: Waiting for pod client-containers-0140390b-c420-45f0-84f4-62950a938bcc to disappear
Jul 20 21:33:43.898: INFO: Pod client-containers-0140390b-c420-45f0-84f4-62950a938bcc no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:33:43.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-326" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2055,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
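
In pod terms, Docker's ENTRYPOINT corresponds to a container's command and CMD to its args, so setting only args keeps the image entrypoint and replaces the default arguments, which is the substitution this test asserts. A sketch of such a pod, with a generic busybox image standing in for the suite's argument-echoing test image:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := kubernetes.NewForConfig(config)
        must(err)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "override-args"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    // Command (the ENTRYPOINT side) is left unset, so the
                    // image's entrypoint is kept; Args replace the image CMD.
                    Args: []string{"echo", "overridden", "args"},
                }},
            },
        }
        _, err = cs.CoreV1().Pods("containers-326").Create(pod)
        must(err)
    }

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }
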
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:33:43.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:33:44.002: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:33:45.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3306" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":142,"skipped":2105,"failed":0}
SSSSSSSSSSSSS
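
Create/delete of a CustomResourceDefinition goes through the apiextensions API group rather than the core clientset. A minimal sketch with a hypothetical widgets.example.com CRD, assuming v1.17-era signatures (apiextensions.k8s.io/v1 requires a structural schema, so the sketch supplies a permissive one):

    package main

    import (
        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := apiextclient.NewForConfig(config)
        must(err)

        preserve := true
        crd := &apiextv1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
            Spec: apiextv1.CustomResourceDefinitionSpec{
                Group: "example.com",
                Scope: apiextv1.NamespaceScoped,
                Names: apiextv1.CustomResourceDefinitionNames{
                    Plural: "widgets", Singular: "widget",
                    Kind: "Widget", ListKind: "WidgetList",
                },
                Versions: []apiextv1.CustomResourceDefinitionVersion{{
                    Name: "v1", Served: true, Storage: true,
                    // A permissive schema: any object is accepted.
                    Schema: &apiextv1.CustomResourceValidation{
                        OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
                            Type:                   "object",
                            XPreserveUnknownFields: &preserve,
                        },
                    },
                }},
            },
        }
        _, err = cs.ApiextensionsV1().CustomResourceDefinitions().Create(crd)
        must(err)
        must(cs.ApiextensionsV1().CustomResourceDefinitions().Delete("widgets.example.com", nil))
    }

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }
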
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:33:45.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:33:45.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9588
I0720 21:33:45.221765       6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9588, replica count: 1
I0720 21:33:46.272148       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:33:47.272357       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:33:48.272594       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:33:49.272941       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
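
With the single backing pod Running, the measurement loop that follows creates a Service selecting that pod ("Created: latency-svc-…") and times how long the endpoints controller takes to populate the matching Endpoints object ("Got endpoints: … [elapsed]"). A sketch of one such measurement, polling where the test itself uses a watch, and assuming the RC's pods carry the label name=svc-latency-rc as the framework's RC runner sets it:

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        must(err)
        cs, err := kubernetes.NewForConfig(config)
        must(err)

        start := time.Now()
        svc, err := cs.CoreV1().Services("svc-latency-9588").Create(&corev1.Service{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "latency-svc-"},
            Spec: corev1.ServiceSpec{
                Selector: map[string]string{"name": "svc-latency-rc"}, // assumed pod label
                Ports:    []corev1.ServicePort{{Port: 80, Protocol: corev1.ProtocolTCP}},
            },
        })
        must(err)

        // Poll until the endpoints controller fills in a ready address.
        must(wait.PollImmediate(10*time.Millisecond, time.Minute, func() (bool, error) {
            ep, err := cs.CoreV1().Endpoints("svc-latency-9588").Get(svc.Name, metav1.GetOptions{})
            if err != nil {
                return false, nil // the Endpoints object may not exist yet
            }
            for _, subset := range ep.Subsets {
                if len(subset.Addresses) > 0 {
                    return true, nil
                }
            }
            return false, nil
        }))
        fmt.Printf("Got endpoints: %s [%v]\n", svc.Name, time.Since(start))
    }

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }
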
Jul 20 21:33:49.415: INFO: Created: latency-svc-msjbk
Jul 20 21:33:49.446: INFO: Got endpoints: latency-svc-msjbk [72.794493ms]
Jul 20 21:33:49.499: INFO: Created: latency-svc-ln24t
Jul 20 21:33:49.523: INFO: Got endpoints: latency-svc-ln24t [76.750378ms]
Jul 20 21:33:49.523: INFO: Created: latency-svc-g8gbh
Jul 20 21:33:49.534: INFO: Got endpoints: latency-svc-g8gbh [88.814069ms]
Jul 20 21:33:49.551: INFO: Created: latency-svc-mbj6v
Jul 20 21:33:49.567: INFO: Got endpoints: latency-svc-mbj6v [121.236933ms]
Jul 20 21:33:49.587: INFO: Created: latency-svc-bbc8r
Jul 20 21:33:49.625: INFO: Got endpoints: latency-svc-bbc8r [179.039479ms]
Jul 20 21:33:49.629: INFO: Created: latency-svc-lrh2j
Jul 20 21:33:49.643: INFO: Got endpoints: latency-svc-lrh2j [197.611089ms]
Jul 20 21:33:49.665: INFO: Created: latency-svc-fxx7v
Jul 20 21:33:49.674: INFO: Got endpoints: latency-svc-fxx7v [228.524037ms]
Jul 20 21:33:49.696: INFO: Created: latency-svc-pg7q2
Jul 20 21:33:49.712: INFO: Got endpoints: latency-svc-pg7q2 [266.140365ms]
Jul 20 21:33:49.799: INFO: Created: latency-svc-htql4
Jul 20 21:33:49.802: INFO: Got endpoints: latency-svc-htql4 [356.261614ms]
Jul 20 21:33:49.857: INFO: Created: latency-svc-h2jww
Jul 20 21:33:49.872: INFO: Got endpoints: latency-svc-h2jww [426.078667ms]
Jul 20 21:33:49.893: INFO: Created: latency-svc-kqhd8
Jul 20 21:33:49.960: INFO: Got endpoints: latency-svc-kqhd8 [513.970292ms]
Jul 20 21:33:49.962: INFO: Created: latency-svc-jhcpd
Jul 20 21:33:49.981: INFO: Got endpoints: latency-svc-jhcpd [534.914313ms]
Jul 20 21:33:50.001: INFO: Created: latency-svc-zlzbf
Jul 20 21:33:50.013: INFO: Got endpoints: latency-svc-zlzbf [567.245585ms]
Jul 20 21:33:50.036: INFO: Created: latency-svc-dmlqm
Jul 20 21:33:50.043: INFO: Got endpoints: latency-svc-dmlqm [596.897029ms]
Jul 20 21:33:50.098: INFO: Created: latency-svc-k5g7v
Jul 20 21:33:50.102: INFO: Got endpoints: latency-svc-k5g7v [655.781326ms]
Jul 20 21:33:50.157: INFO: Created: latency-svc-56bnm
Jul 20 21:33:50.278: INFO: Got endpoints: latency-svc-56bnm [831.766136ms]
Jul 20 21:33:50.281: INFO: Created: latency-svc-6xpt6
Jul 20 21:33:50.286: INFO: Got endpoints: latency-svc-6xpt6 [763.569474ms]
Jul 20 21:33:50.338: INFO: Created: latency-svc-whb9f
Jul 20 21:33:50.371: INFO: Got endpoints: latency-svc-whb9f [836.052865ms]
Jul 20 21:33:50.435: INFO: Created: latency-svc-j6p64
Jul 20 21:33:50.440: INFO: Got endpoints: latency-svc-j6p64 [873.291478ms]
Jul 20 21:33:50.469: INFO: Created: latency-svc-sr9gk
Jul 20 21:33:50.483: INFO: Got endpoints: latency-svc-sr9gk [857.740484ms]
Jul 20 21:33:50.505: INFO: Created: latency-svc-6s29p
Jul 20 21:33:50.519: INFO: Got endpoints: latency-svc-6s29p [875.319446ms]
Jul 20 21:33:50.571: INFO: Created: latency-svc-c5dkd
Jul 20 21:33:50.574: INFO: Got endpoints: latency-svc-c5dkd [899.955259ms]
Jul 20 21:33:50.608: INFO: Created: latency-svc-phgqj
Jul 20 21:33:50.634: INFO: Got endpoints: latency-svc-phgqj [921.665136ms]
Jul 20 21:33:50.650: INFO: Created: latency-svc-n5v5z
Jul 20 21:33:50.664: INFO: Got endpoints: latency-svc-n5v5z [861.816341ms]
Jul 20 21:33:50.710: INFO: Created: latency-svc-ssnc4
Jul 20 21:33:50.730: INFO: Got endpoints: latency-svc-ssnc4 [857.492456ms]
Jul 20 21:33:50.757: INFO: Created: latency-svc-vlm7r
Jul 20 21:33:50.772: INFO: Got endpoints: latency-svc-vlm7r [811.837902ms]
Jul 20 21:33:50.798: INFO: Created: latency-svc-67dzn
Jul 20 21:33:50.870: INFO: Got endpoints: latency-svc-67dzn [889.207665ms]
Jul 20 21:33:50.873: INFO: Created: latency-svc-5j5kf
Jul 20 21:33:50.896: INFO: Got endpoints: latency-svc-5j5kf [882.608534ms]
Jul 20 21:33:50.938: INFO: Created: latency-svc-mnjsl
Jul 20 21:33:51.008: INFO: Got endpoints: latency-svc-mnjsl [964.968436ms]
Jul 20 21:33:51.038: INFO: Created: latency-svc-crvdf
Jul 20 21:33:51.062: INFO: Got endpoints: latency-svc-crvdf [960.419596ms]
Jul 20 21:33:51.088: INFO: Created: latency-svc-8qh5s
Jul 20 21:33:51.097: INFO: Got endpoints: latency-svc-8qh5s [819.557291ms]
Jul 20 21:33:51.158: INFO: Created: latency-svc-lpd9w
Jul 20 21:33:51.182: INFO: Got endpoints: latency-svc-lpd9w [895.568521ms]
Jul 20 21:33:51.224: INFO: Created: latency-svc-p5zg5
Jul 20 21:33:51.290: INFO: Got endpoints: latency-svc-p5zg5 [919.386976ms]
Jul 20 21:33:51.340: INFO: Created: latency-svc-dlk82
Jul 20 21:33:51.355: INFO: Got endpoints: latency-svc-dlk82 [915.097832ms]
Jul 20 21:33:51.430: INFO: Created: latency-svc-w6gvk
Jul 20 21:33:51.444: INFO: Got endpoints: latency-svc-w6gvk [961.541279ms]
Jul 20 21:33:51.494: INFO: Created: latency-svc-8r6p2
Jul 20 21:33:51.504: INFO: Got endpoints: latency-svc-8r6p2 [984.940332ms]
Jul 20 21:33:51.565: INFO: Created: latency-svc-dfpjs
Jul 20 21:33:51.591: INFO: Created: latency-svc-tdfpj
Jul 20 21:33:51.591: INFO: Got endpoints: latency-svc-dfpjs [1.016439042s]
Jul 20 21:33:51.601: INFO: Got endpoints: latency-svc-tdfpj [967.20159ms]
Jul 20 21:33:51.664: INFO: Created: latency-svc-smcd7
Jul 20 21:33:51.733: INFO: Got endpoints: latency-svc-smcd7 [1.068746266s]
Jul 20 21:33:51.734: INFO: Created: latency-svc-mrdrw
Jul 20 21:33:51.746: INFO: Got endpoints: latency-svc-mrdrw [1.015983192s]
Jul 20 21:33:51.776: INFO: Created: latency-svc-tl97j
Jul 20 21:33:51.807: INFO: Got endpoints: latency-svc-tl97j [1.034820052s]
Jul 20 21:33:51.870: INFO: Created: latency-svc-77mxv
Jul 20 21:33:51.896: INFO: Got endpoints: latency-svc-77mxv [1.025672965s]
Jul 20 21:33:51.934: INFO: Created: latency-svc-ck454
Jul 20 21:33:51.951: INFO: Got endpoints: latency-svc-ck454 [1.055489604s]
Jul 20 21:33:52.038: INFO: Created: latency-svc-g6rpm
Jul 20 21:33:52.046: INFO: Got endpoints: latency-svc-g6rpm [1.038208686s]
Jul 20 21:33:52.095: INFO: Created: latency-svc-dvc8r
Jul 20 21:33:52.118: INFO: Got endpoints: latency-svc-dvc8r [1.056013464s]
Jul 20 21:33:52.182: INFO: Created: latency-svc-fmspj
Jul 20 21:33:52.187: INFO: Got endpoints: latency-svc-fmspj [1.089845782s]
Jul 20 21:33:52.281: INFO: Created: latency-svc-znjqv
Jul 20 21:33:52.376: INFO: Created: latency-svc-ktgq7
Jul 20 21:33:52.376: INFO: Got endpoints: latency-svc-znjqv [1.194341348s]
Jul 20 21:33:52.418: INFO: Got endpoints: latency-svc-ktgq7 [1.127764552s]
Jul 20 21:33:52.517: INFO: Created: latency-svc-vjqgf
Jul 20 21:33:52.519: INFO: Created: latency-svc-fh55z
Jul 20 21:33:52.528: INFO: Got endpoints: latency-svc-vjqgf [1.172147194s]
Jul 20 21:33:52.528: INFO: Got endpoints: latency-svc-fh55z [1.083418604s]
Jul 20 21:33:52.546: INFO: Created: latency-svc-s7js9
Jul 20 21:33:52.558: INFO: Got endpoints: latency-svc-s7js9 [1.053917291s]
Jul 20 21:33:52.610: INFO: Created: latency-svc-t5hgp
Jul 20 21:33:52.643: INFO: Got endpoints: latency-svc-t5hgp [1.051787421s]
Jul 20 21:33:52.675: INFO: Created: latency-svc-grsr6
Jul 20 21:33:52.690: INFO: Got endpoints: latency-svc-grsr6 [1.089076284s]
Jul 20 21:33:52.719: INFO: Created: latency-svc-kbgjt
Jul 20 21:33:52.732: INFO: Got endpoints: latency-svc-kbgjt [999.60868ms]
Jul 20 21:33:52.798: INFO: Created: latency-svc-pggqr
Jul 20 21:33:52.802: INFO: Got endpoints: latency-svc-pggqr [1.056210066s]
Jul 20 21:33:52.857: INFO: Created: latency-svc-fnlmc
Jul 20 21:33:52.895: INFO: Got endpoints: latency-svc-fnlmc [1.08792491s]
Jul 20 21:33:52.966: INFO: Created: latency-svc-7qggb
Jul 20 21:33:52.981: INFO: Got endpoints: latency-svc-7qggb [1.085269667s]
Jul 20 21:33:53.007: INFO: Created: latency-svc-px7mn
Jul 20 21:33:53.031: INFO: Got endpoints: latency-svc-px7mn [1.079776959s]
Jul 20 21:33:53.061: INFO: Created: latency-svc-bk8zb
Jul 20 21:33:53.127: INFO: Got endpoints: latency-svc-bk8zb [1.081214535s]
Jul 20 21:33:53.129: INFO: Created: latency-svc-mshnv
Jul 20 21:33:53.142: INFO: Got endpoints: latency-svc-mshnv [1.023240535s]
Jul 20 21:33:53.167: INFO: Created: latency-svc-nl2ph
Jul 20 21:33:53.181: INFO: Got endpoints: latency-svc-nl2ph [993.276322ms]
Jul 20 21:33:53.203: INFO: Created: latency-svc-xzmdc
Jul 20 21:33:53.221: INFO: Got endpoints: latency-svc-xzmdc [844.046874ms]
Jul 20 21:33:53.295: INFO: Created: latency-svc-hqp5s
Jul 20 21:33:53.330: INFO: Got endpoints: latency-svc-hqp5s [912.500551ms]
Jul 20 21:33:53.391: INFO: Created: latency-svc-lhz94
Jul 20 21:33:53.469: INFO: Got endpoints: latency-svc-lhz94 [941.084054ms]
Jul 20 21:33:53.471: INFO: Created: latency-svc-xj277
Jul 20 21:33:53.477: INFO: Got endpoints: latency-svc-xj277 [948.852934ms]
Jul 20 21:33:53.503: INFO: Created: latency-svc-t4s4w
Jul 20 21:33:53.533: INFO: Got endpoints: latency-svc-t4s4w [975.499234ms]
Jul 20 21:33:53.559: INFO: Created: latency-svc-lhj8r
Jul 20 21:33:53.595: INFO: Got endpoints: latency-svc-lhj8r [952.068152ms]
Jul 20 21:33:53.612: INFO: Created: latency-svc-86926
Jul 20 21:33:53.629: INFO: Got endpoints: latency-svc-86926 [939.220286ms]
Jul 20 21:33:53.648: INFO: Created: latency-svc-lhjzm
Jul 20 21:33:53.657: INFO: Got endpoints: latency-svc-lhjzm [924.928488ms]
Jul 20 21:33:53.683: INFO: Created: latency-svc-cb942
Jul 20 21:33:53.732: INFO: Got endpoints: latency-svc-cb942 [930.258777ms]
Jul 20 21:33:53.749: INFO: Created: latency-svc-25m7d
Jul 20 21:33:53.778: INFO: Got endpoints: latency-svc-25m7d [883.262933ms]
Jul 20 21:33:53.810: INFO: Created: latency-svc-9pdtg
Jul 20 21:33:53.871: INFO: Got endpoints: latency-svc-9pdtg [889.891621ms]
Jul 20 21:33:53.894: INFO: Created: latency-svc-4zf9c
Jul 20 21:33:53.905: INFO: Got endpoints: latency-svc-4zf9c [873.285449ms]
Jul 20 21:33:53.925: INFO: Created: latency-svc-v945g
Jul 20 21:33:53.934: INFO: Got endpoints: latency-svc-v945g [806.701489ms]
Jul 20 21:33:53.959: INFO: Created: latency-svc-x89pf
Jul 20 21:33:54.032: INFO: Got endpoints: latency-svc-x89pf [890.060775ms]
Jul 20 21:33:54.036: INFO: Created: latency-svc-swlbs
Jul 20 21:33:54.049: INFO: Got endpoints: latency-svc-swlbs [868.372882ms]
Jul 20 21:33:54.080: INFO: Created: latency-svc-mvjq7
Jul 20 21:33:54.116: INFO: Got endpoints: latency-svc-mvjq7 [894.90906ms]
Jul 20 21:33:54.188: INFO: Created: latency-svc-cqgtw
Jul 20 21:33:54.199: INFO: Got endpoints: latency-svc-cqgtw [869.07834ms]
Jul 20 21:33:54.218: INFO: Created: latency-svc-lxxr6
Jul 20 21:33:54.237: INFO: Got endpoints: latency-svc-lxxr6 [767.895098ms]
Jul 20 21:33:54.331: INFO: Created: latency-svc-tjvh2
Jul 20 21:33:54.334: INFO: Got endpoints: latency-svc-tjvh2 [857.486688ms]
Jul 20 21:33:54.398: INFO: Created: latency-svc-pr5gt
Jul 20 21:33:54.411: INFO: Got endpoints: latency-svc-pr5gt [877.531135ms]
Jul 20 21:33:54.475: INFO: Created: latency-svc-92dl5
Jul 20 21:33:54.478: INFO: Got endpoints: latency-svc-92dl5 [883.786365ms]
Jul 20 21:33:54.504: INFO: Created: latency-svc-kn9m2
Jul 20 21:33:54.519: INFO: Got endpoints: latency-svc-kn9m2 [889.775329ms]
Jul 20 21:33:54.541: INFO: Created: latency-svc-cxd9t
Jul 20 21:33:54.555: INFO: Got endpoints: latency-svc-cxd9t [898.094747ms]
Jul 20 21:33:54.613: INFO: Created: latency-svc-zmz65
Jul 20 21:33:54.617: INFO: Got endpoints: latency-svc-zmz65 [884.431243ms]
Jul 20 21:33:54.643: INFO: Created: latency-svc-ct8r2
Jul 20 21:33:54.658: INFO: Got endpoints: latency-svc-ct8r2 [879.73812ms]
Jul 20 21:33:54.683: INFO: Created: latency-svc-h4gt9
Jul 20 21:33:54.768: INFO: Got endpoints: latency-svc-h4gt9 [897.17333ms]
Jul 20 21:33:54.776: INFO: Created: latency-svc-b7djt
Jul 20 21:33:54.790: INFO: Got endpoints: latency-svc-b7djt [885.778074ms]
Jul 20 21:33:54.828: INFO: Created: latency-svc-qb9hm
Jul 20 21:33:54.845: INFO: Got endpoints: latency-svc-qb9hm [910.58736ms]
Jul 20 21:33:54.865: INFO: Created: latency-svc-9zczt
Jul 20 21:33:54.906: INFO: Got endpoints: latency-svc-9zczt [874.170506ms]
Jul 20 21:33:54.938: INFO: Created: latency-svc-wnqsh
Jul 20 21:33:54.954: INFO: Got endpoints: latency-svc-wnqsh [904.437662ms]
Jul 20 21:33:54.974: INFO: Created: latency-svc-fn8rq
Jul 20 21:33:54.992: INFO: Got endpoints: latency-svc-fn8rq [875.944817ms]
Jul 20 21:33:55.051: INFO: Created: latency-svc-gkqbz
Jul 20 21:33:55.053: INFO: Got endpoints: latency-svc-gkqbz [853.511256ms]
Jul 20 21:33:55.086: INFO: Created: latency-svc-7mc7c
Jul 20 21:33:55.098: INFO: Got endpoints: latency-svc-7mc7c [861.4734ms]
Jul 20 21:33:55.117: INFO: Created: latency-svc-gndvf
Jul 20 21:33:55.147: INFO: Got endpoints: latency-svc-gndvf [812.308754ms]
Jul 20 21:33:55.202: INFO: Created: latency-svc-wh2ph
Jul 20 21:33:55.213: INFO: Got endpoints: latency-svc-wh2ph [801.963926ms]
Jul 20 21:33:55.232: INFO: Created: latency-svc-wxzlp
Jul 20 21:33:55.243: INFO: Got endpoints: latency-svc-wxzlp [764.444112ms]
Jul 20 21:33:55.273: INFO: Created: latency-svc-kpphw
Jul 20 21:33:55.313: INFO: Got endpoints: latency-svc-kpphw [794.084519ms]
Jul 20 21:33:55.368: INFO: Created: latency-svc-gmtm5
Jul 20 21:33:55.400: INFO: Got endpoints: latency-svc-gmtm5 [844.20156ms]
Jul 20 21:33:55.457: INFO: Created: latency-svc-6q966
Jul 20 21:33:55.472: INFO: Got endpoints: latency-svc-6q966 [855.341364ms]
Jul 20 21:33:55.496: INFO: Created: latency-svc-74c6z
Jul 20 21:33:55.508: INFO: Got endpoints: latency-svc-74c6z [850.151031ms]
Jul 20 21:33:55.532: INFO: Created: latency-svc-gvf5v
Jul 20 21:33:55.556: INFO: Got endpoints: latency-svc-gvf5v [787.182822ms]
Jul 20 21:33:55.609: INFO: Created: latency-svc-48pnq
Jul 20 21:33:55.617: INFO: Got endpoints: latency-svc-48pnq [826.033384ms]
Jul 20 21:33:55.638: INFO: Created: latency-svc-nszv7
Jul 20 21:33:55.653: INFO: Got endpoints: latency-svc-nszv7 [807.879705ms]
Jul 20 21:33:55.681: INFO: Created: latency-svc-5hd6x
Jul 20 21:33:55.696: INFO: Got endpoints: latency-svc-5hd6x [789.922919ms]
Jul 20 21:33:55.763: INFO: Created: latency-svc-xswwb
Jul 20 21:33:55.767: INFO: Got endpoints: latency-svc-xswwb [813.787338ms]
Jul 20 21:33:55.806: INFO: Created: latency-svc-dbv55
Jul 20 21:33:55.836: INFO: Got endpoints: latency-svc-dbv55 [844.354478ms]
Jul 20 21:33:55.901: INFO: Created: latency-svc-2t86w
Jul 20 21:33:55.904: INFO: Got endpoints: latency-svc-2t86w [850.455079ms]
Jul 20 21:33:55.928: INFO: Created: latency-svc-llsk9
Jul 20 21:33:55.942: INFO: Got endpoints: latency-svc-llsk9 [844.081827ms]
Jul 20 21:33:55.963: INFO: Created: latency-svc-jkm5z
Jul 20 21:33:55.973: INFO: Got endpoints: latency-svc-jkm5z [825.850944ms]
Jul 20 21:33:56.050: INFO: Created: latency-svc-86xxh
Jul 20 21:33:56.054: INFO: Got endpoints: latency-svc-86xxh [840.776765ms]
Jul 20 21:33:56.095: INFO: Created: latency-svc-f22lm
Jul 20 21:33:56.136: INFO: Got endpoints: latency-svc-f22lm [893.093488ms]
Jul 20 21:33:56.183: INFO: Created: latency-svc-5ggd6
Jul 20 21:33:56.209: INFO: Got endpoints: latency-svc-5ggd6 [896.017888ms]
Jul 20 21:33:56.212: INFO: Created: latency-svc-h668g
Jul 20 21:33:56.220: INFO: Got endpoints: latency-svc-h668g [819.880149ms]
Jul 20 21:33:56.240: INFO: Created: latency-svc-hqccr
Jul 20 21:33:56.263: INFO: Got endpoints: latency-svc-hqccr [791.03097ms]
Jul 20 21:33:56.355: INFO: Created: latency-svc-zjhgt
Jul 20 21:33:56.357: INFO: Created: latency-svc-x5kkb
Jul 20 21:33:56.376: INFO: Got endpoints: latency-svc-x5kkb [820.103012ms]
Jul 20 21:33:56.376: INFO: Got endpoints: latency-svc-zjhgt [867.738895ms]
Jul 20 21:33:56.407: INFO: Created: latency-svc-j72sq
Jul 20 21:33:56.419: INFO: Got endpoints: latency-svc-j72sq [802.318253ms]
Jul 20 21:33:56.449: INFO: Created: latency-svc-gqjm6
Jul 20 21:33:56.505: INFO: Got endpoints: latency-svc-gqjm6 [851.739429ms]
Jul 20 21:33:56.521: INFO: Created: latency-svc-qj9qb
Jul 20 21:33:56.534: INFO: Got endpoints: latency-svc-qj9qb [837.621056ms]
Jul 20 21:33:56.555: INFO: Created: latency-svc-jbjvm
Jul 20 21:33:56.570: INFO: Got endpoints: latency-svc-jbjvm [802.53616ms]
Jul 20 21:33:56.592: INFO: Created: latency-svc-zsx2f
Jul 20 21:33:56.636: INFO: Got endpoints: latency-svc-zsx2f [800.078303ms]
Jul 20 21:33:56.639: INFO: Created: latency-svc-f7t75
Jul 20 21:33:56.655: INFO: Got endpoints: latency-svc-f7t75 [751.237823ms]
Jul 20 21:33:56.707: INFO: Created: latency-svc-gkn94
Jul 20 21:33:56.721: INFO: Got endpoints: latency-svc-gkn94 [778.366367ms]
Jul 20 21:33:56.775: INFO: Created: latency-svc-th656
Jul 20 21:33:56.778: INFO: Got endpoints: latency-svc-th656 [805.22242ms]
Jul 20 21:33:56.833: INFO: Created: latency-svc-drt7g
Jul 20 21:33:56.847: INFO: Got endpoints: latency-svc-drt7g [793.244476ms]
Jul 20 21:33:56.931: INFO: Created: latency-svc-wrfdd
Jul 20 21:33:56.937: INFO: Got endpoints: latency-svc-wrfdd [800.494938ms]
Jul 20 21:33:56.958: INFO: Created: latency-svc-p5fxt
Jul 20 21:33:56.967: INFO: Got endpoints: latency-svc-p5fxt [757.367387ms]
Jul 20 21:33:56.987: INFO: Created: latency-svc-x827b
Jul 20 21:33:56.997: INFO: Got endpoints: latency-svc-x827b [777.328309ms]
Jul 20 21:33:57.019: INFO: Created: latency-svc-kn752
Jul 20 21:33:57.062: INFO: Got endpoints: latency-svc-kn752 [798.379835ms]
Jul 20 21:33:57.079: INFO: Created: latency-svc-s4vr9
Jul 20 21:33:57.094: INFO: Got endpoints: latency-svc-s4vr9 [717.993568ms]
Jul 20 21:33:57.115: INFO: Created: latency-svc-ld2sz
Jul 20 21:33:57.130: INFO: Got endpoints: latency-svc-ld2sz [754.076295ms]
Jul 20 21:33:57.150: INFO: Created: latency-svc-k87nt
Jul 20 21:33:57.161: INFO: Got endpoints: latency-svc-k87nt [741.686814ms]
Jul 20 21:33:57.212: INFO: Created: latency-svc-h9hb9
Jul 20 21:33:57.227: INFO: Got endpoints: latency-svc-h9hb9 [722.071562ms]
Jul 20 21:33:57.277: INFO: Created: latency-svc-sf646
Jul 20 21:33:57.293: INFO: Got endpoints: latency-svc-sf646 [759.119841ms]
Jul 20 21:33:57.385: INFO: Created: latency-svc-wxd6x
Jul 20 21:33:57.388: INFO: Got endpoints: latency-svc-wxd6x [817.548396ms]
Jul 20 21:33:57.467: INFO: Created: latency-svc-9b5c9
Jul 20 21:33:57.511: INFO: Got endpoints: latency-svc-9b5c9 [874.380996ms]
Jul 20 21:33:57.533: INFO: Created: latency-svc-l8dt5
Jul 20 21:33:57.545: INFO: Got endpoints: latency-svc-l8dt5 [890.659361ms]
Jul 20 21:33:57.570: INFO: Created: latency-svc-7v2nl
Jul 20 21:33:57.594: INFO: Got endpoints: latency-svc-7v2nl [873.723152ms]
Jul 20 21:33:57.655: INFO: Created: latency-svc-xqp5w
Jul 20 21:33:57.666: INFO: Got endpoints: latency-svc-xqp5w [888.113808ms]
Jul 20 21:33:57.683: INFO: Created: latency-svc-z8xhz
Jul 20 21:33:57.697: INFO: Got endpoints: latency-svc-z8xhz [849.497262ms]
Jul 20 21:33:57.725: INFO: Created: latency-svc-v78ss
Jul 20 21:33:57.822: INFO: Got endpoints: latency-svc-v78ss [885.416021ms]
Jul 20 21:33:57.841: INFO: Created: latency-svc-r4rkp
Jul 20 21:33:57.865: INFO: Got endpoints: latency-svc-r4rkp [898.10037ms]
Jul 20 21:33:57.882: INFO: Created: latency-svc-lsb5k
Jul 20 21:33:57.895: INFO: Got endpoints: latency-svc-lsb5k [898.130671ms]
Jul 20 21:33:57.912: INFO: Created: latency-svc-rlvmb
Jul 20 21:33:57.960: INFO: Got endpoints: latency-svc-rlvmb [898.540169ms]
Jul 20 21:33:57.970: INFO: Created: latency-svc-pgzk7
Jul 20 21:33:57.986: INFO: Got endpoints: latency-svc-pgzk7 [891.693898ms]
Jul 20 21:33:58.007: INFO: Created: latency-svc-ngh7z
Jul 20 21:33:58.022: INFO: Got endpoints: latency-svc-ngh7z [892.448676ms]
Jul 20 21:33:58.159: INFO: Created: latency-svc-595bk
Jul 20 21:33:58.172: INFO: Got endpoints: latency-svc-595bk [1.01110019s]
Jul 20 21:33:58.199: INFO: Created: latency-svc-mmxg9
Jul 20 21:33:58.208: INFO: Got endpoints: latency-svc-mmxg9 [981.152632ms]
Jul 20 21:33:58.235: INFO: Created: latency-svc-bttdv
Jul 20 21:33:58.244: INFO: Got endpoints: latency-svc-bttdv [951.291254ms]
Jul 20 21:33:58.307: INFO: Created: latency-svc-qcthh
Jul 20 21:33:58.316: INFO: Got endpoints: latency-svc-qcthh [928.490703ms]
Jul 20 21:33:58.338: INFO: Created: latency-svc-n6h8j
Jul 20 21:33:58.359: INFO: Got endpoints: latency-svc-n6h8j [847.953364ms]
Jul 20 21:33:58.403: INFO: Created: latency-svc-rzzj4
Jul 20 21:33:58.469: INFO: Got endpoints: latency-svc-rzzj4 [923.52125ms]
Jul 20 21:33:58.474: INFO: Created: latency-svc-246g8
Jul 20 21:33:58.491: INFO: Got endpoints: latency-svc-246g8 [896.324119ms]
Jul 20 21:33:58.517: INFO: Created: latency-svc-7k64h
Jul 20 21:33:58.529: INFO: Got endpoints: latency-svc-7k64h [862.486638ms]
Jul 20 21:33:58.554: INFO: Created: latency-svc-lwwhx
Jul 20 21:33:58.619: INFO: Got endpoints: latency-svc-lwwhx [922.25469ms]
Jul 20 21:33:58.621: INFO: Created: latency-svc-n5bn8
Jul 20 21:33:58.630: INFO: Got endpoints: latency-svc-n5bn8 [807.443585ms]
Jul 20 21:33:58.657: INFO: Created: latency-svc-nhr24
Jul 20 21:33:58.672: INFO: Got endpoints: latency-svc-nhr24 [807.111116ms]
Jul 20 21:33:58.697: INFO: Created: latency-svc-v25fm
Jul 20 21:33:58.793: INFO: Got endpoints: latency-svc-v25fm [897.445514ms]
Jul 20 21:33:58.796: INFO: Created: latency-svc-dtjj8
Jul 20 21:33:58.810: INFO: Got endpoints: latency-svc-dtjj8 [849.974248ms]
Jul 20 21:33:58.848: INFO: Created: latency-svc-q4qrz
Jul 20 21:33:58.871: INFO: Got endpoints: latency-svc-q4qrz [885.273087ms]
Jul 20 21:33:58.949: INFO: Created: latency-svc-wmgvs
Jul 20 21:33:58.962: INFO: Got endpoints: latency-svc-wmgvs [939.097157ms]
Jul 20 21:33:58.997: INFO: Created: latency-svc-c9htb
Jul 20 21:33:59.009: INFO: Got endpoints: latency-svc-c9htb [837.503618ms]
Jul 20 21:33:59.032: INFO: Created: latency-svc-nn689
Jul 20 21:33:59.045: INFO: Got endpoints: latency-svc-nn689 [836.975434ms]
Jul 20 21:33:59.098: INFO: Created: latency-svc-bvf45
Jul 20 21:33:59.130: INFO: Got endpoints: latency-svc-bvf45 [885.664668ms]
Jul 20 21:33:59.130: INFO: Created: latency-svc-v2l6z
Jul 20 21:33:59.145: INFO: Got endpoints: latency-svc-v2l6z [828.852999ms]
Jul 20 21:33:59.172: INFO: Created: latency-svc-fj2xz
Jul 20 21:33:59.187: INFO: Got endpoints: latency-svc-fj2xz [828.528522ms]
Jul 20 21:33:59.236: INFO: Created: latency-svc-292ls
Jul 20 21:33:59.238: INFO: Got endpoints: latency-svc-292ls [769.292396ms]
Jul 20 21:33:59.296: INFO: Created: latency-svc-pwf2k
Jul 20 21:33:59.307: INFO: Got endpoints: latency-svc-pwf2k [816.480586ms]
Jul 20 21:33:59.327: INFO: Created: latency-svc-qpctl
Jul 20 21:33:59.422: INFO: Got endpoints: latency-svc-qpctl [892.884923ms]
Jul 20 21:33:59.424: INFO: Created: latency-svc-7d92j
Jul 20 21:33:59.440: INFO: Got endpoints: latency-svc-7d92j [820.780512ms]
Jul 20 21:33:59.484: INFO: Created: latency-svc-bnprh
Jul 20 21:33:59.494: INFO: Got endpoints: latency-svc-bnprh [864.197892ms]
Jul 20 21:33:59.547: INFO: Created: latency-svc-lg9dk
Jul 20 21:33:59.549: INFO: Got endpoints: latency-svc-lg9dk [877.055973ms]
Jul 20 21:33:59.618: INFO: Created: latency-svc-96tbs
Jul 20 21:33:59.632: INFO: Got endpoints: latency-svc-96tbs [839.620952ms]
Jul 20 21:33:59.679: INFO: Created: latency-svc-b6jc9
Jul 20 21:33:59.686: INFO: Got endpoints: latency-svc-b6jc9 [876.201954ms]
Jul 20 21:33:59.713: INFO: Created: latency-svc-8cbtr
Jul 20 21:33:59.735: INFO: Got endpoints: latency-svc-8cbtr [864.09829ms]
Jul 20 21:33:59.772: INFO: Created: latency-svc-4npwt
Jul 20 21:33:59.840: INFO: Got endpoints: latency-svc-4npwt [878.789579ms]
Jul 20 21:33:59.841: INFO: Created: latency-svc-lfwfz
Jul 20 21:33:59.849: INFO: Got endpoints: latency-svc-lfwfz [839.66776ms]
Jul 20 21:33:59.908: INFO: Created: latency-svc-v6mjh
Jul 20 21:33:59.928: INFO: Got endpoints: latency-svc-v6mjh [882.789449ms]
Jul 20 21:33:59.974: INFO: Created: latency-svc-8zbn7
Jul 20 21:33:59.976: INFO: Got endpoints: latency-svc-8zbn7 [846.093793ms]
Jul 20 21:34:00.018: INFO: Created: latency-svc-cz7bk
Jul 20 21:34:00.115: INFO: Got endpoints: latency-svc-cz7bk [970.20705ms]
Jul 20 21:34:00.142: INFO: Created: latency-svc-szd7r
Jul 20 21:34:00.156: INFO: Got endpoints: latency-svc-szd7r [969.032866ms]
Jul 20 21:34:00.190: INFO: Created: latency-svc-lvsrj
Jul 20 21:34:00.204: INFO: Got endpoints: latency-svc-lvsrj [965.664159ms]
Jul 20 21:34:00.277: INFO: Created: latency-svc-mmw49
Jul 20 21:34:00.288: INFO: Got endpoints: latency-svc-mmw49 [980.283253ms]
Jul 20 21:34:00.335: INFO: Created: latency-svc-rdf7z
Jul 20 21:34:00.349: INFO: Got endpoints: latency-svc-rdf7z [926.923162ms]
Jul 20 21:34:00.421: INFO: Created: latency-svc-x4bzc
Jul 20 21:34:00.424: INFO: Got endpoints: latency-svc-x4bzc [984.040768ms]
Jul 20 21:34:00.454: INFO: Created: latency-svc-8rbpl
Jul 20 21:34:00.485: INFO: Got endpoints: latency-svc-8rbpl [991.114539ms]
Jul 20 21:34:00.566: INFO: Created: latency-svc-gtpql
Jul 20 21:34:00.593: INFO: Got endpoints: latency-svc-gtpql [1.044094203s]
Jul 20 21:34:00.594: INFO: Created: latency-svc-j97bq
Jul 20 21:34:00.614: INFO: Got endpoints: latency-svc-j97bq [981.001501ms]
Jul 20 21:34:00.646: INFO: Created: latency-svc-8qxmh
Jul 20 21:34:00.662: INFO: Got endpoints: latency-svc-8qxmh [975.341835ms]
Jul 20 21:34:00.715: INFO: Created: latency-svc-fc2jc
Jul 20 21:34:00.765: INFO: Got endpoints: latency-svc-fc2jc [1.030269038s]
Jul 20 21:34:00.766: INFO: Created: latency-svc-s5hnw
Jul 20 21:34:00.782: INFO: Got endpoints: latency-svc-s5hnw [941.58709ms]
Jul 20 21:34:00.811: INFO: Created: latency-svc-nprtf
Jul 20 21:34:00.852: INFO: Got endpoints: latency-svc-nprtf [1.002880809s]
Jul 20 21:34:00.882: INFO: Created: latency-svc-87wzd
Jul 20 21:34:00.897: INFO: Got endpoints: latency-svc-87wzd [968.907256ms]
Jul 20 21:34:00.922: INFO: Created: latency-svc-gmb46
Jul 20 21:34:00.939: INFO: Got endpoints: latency-svc-gmb46 [962.556085ms]
Jul 20 21:34:01.008: INFO: Created: latency-svc-xzglg
Jul 20 21:34:01.011: INFO: Got endpoints: latency-svc-xzglg [895.598601ms]
Jul 20 21:34:01.054: INFO: Created: latency-svc-ddlzs
Jul 20 21:34:01.071: INFO: Got endpoints: latency-svc-ddlzs [914.905238ms]
Jul 20 21:34:01.097: INFO: Created: latency-svc-m2swg
Jul 20 21:34:01.140: INFO: Got endpoints: latency-svc-m2swg [935.419006ms]
Jul 20 21:34:01.157: INFO: Created: latency-svc-9t76h
Jul 20 21:34:01.168: INFO: Got endpoints: latency-svc-9t76h [879.866324ms]
Jul 20 21:34:01.194: INFO: Created: latency-svc-nfsgc
Jul 20 21:34:01.204: INFO: Got endpoints: latency-svc-nfsgc [855.223264ms]
Jul 20 21:34:01.229: INFO: Created: latency-svc-hblmt
Jul 20 21:34:01.289: INFO: Got endpoints: latency-svc-hblmt [865.512701ms]
Jul 20 21:34:01.289: INFO: Latencies: [76.750378ms 88.814069ms 121.236933ms 179.039479ms 197.611089ms 228.524037ms 266.140365ms 356.261614ms 426.078667ms 513.970292ms 534.914313ms 567.245585ms 596.897029ms 655.781326ms 717.993568ms 722.071562ms 741.686814ms 751.237823ms 754.076295ms 757.367387ms 759.119841ms 763.569474ms 764.444112ms 767.895098ms 769.292396ms 777.328309ms 778.366367ms 787.182822ms 789.922919ms 791.03097ms 793.244476ms 794.084519ms 798.379835ms 800.078303ms 800.494938ms 801.963926ms 802.318253ms 802.53616ms 805.22242ms 806.701489ms 807.111116ms 807.443585ms 807.879705ms 811.837902ms 812.308754ms 813.787338ms 816.480586ms 817.548396ms 819.557291ms 819.880149ms 820.103012ms 820.780512ms 825.850944ms 826.033384ms 828.528522ms 828.852999ms 831.766136ms 836.052865ms 836.975434ms 837.503618ms 837.621056ms 839.620952ms 839.66776ms 840.776765ms 844.046874ms 844.081827ms 844.20156ms 844.354478ms 846.093793ms 847.953364ms 849.497262ms 849.974248ms 850.151031ms 850.455079ms 851.739429ms 853.511256ms 855.223264ms 855.341364ms 857.486688ms 857.492456ms 857.740484ms 861.4734ms 861.816341ms 862.486638ms 864.09829ms 864.197892ms 865.512701ms 867.738895ms 868.372882ms 869.07834ms 873.285449ms 873.291478ms 873.723152ms 874.170506ms 874.380996ms 875.319446ms 875.944817ms 876.201954ms 877.055973ms 877.531135ms 878.789579ms 879.73812ms 879.866324ms 882.608534ms 882.789449ms 883.262933ms 883.786365ms 884.431243ms 885.273087ms 885.416021ms 885.664668ms 885.778074ms 888.113808ms 889.207665ms 889.775329ms 889.891621ms 890.060775ms 890.659361ms 891.693898ms 892.448676ms 892.884923ms 893.093488ms 894.90906ms 895.568521ms 895.598601ms 896.017888ms 896.324119ms 897.17333ms 897.445514ms 898.094747ms 898.10037ms 898.130671ms 898.540169ms 899.955259ms 904.437662ms 910.58736ms 912.500551ms 914.905238ms 915.097832ms 919.386976ms 921.665136ms 922.25469ms 923.52125ms 924.928488ms 926.923162ms 928.490703ms 930.258777ms 935.419006ms 939.097157ms 939.220286ms 941.084054ms 941.58709ms 948.852934ms 951.291254ms 952.068152ms 960.419596ms 961.541279ms 962.556085ms 964.968436ms 965.664159ms 967.20159ms 968.907256ms 969.032866ms 970.20705ms 975.341835ms 975.499234ms 980.283253ms 981.001501ms 981.152632ms 984.040768ms 984.940332ms 991.114539ms 993.276322ms 999.60868ms 1.002880809s 1.01110019s 1.015983192s 1.016439042s 1.023240535s 1.025672965s 1.030269038s 1.034820052s 1.038208686s 1.044094203s 1.051787421s 1.053917291s 1.055489604s 1.056013464s 1.056210066s 1.068746266s 1.079776959s 1.081214535s 1.083418604s 1.085269667s 1.08792491s 1.089076284s 1.089845782s 1.127764552s 1.172147194s 1.194341348s]
Jul 20 21:34:01.290: INFO: 50 %ile: 878.789579ms
Jul 20 21:34:01.290: INFO: 90 %ile: 1.030269038s
Jul 20 21:34:01.290: INFO: 99 %ile: 1.172147194s
Jul 20 21:34:01.290: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:34:01.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9588" for this suite.

• [SLOW TEST:16.151 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":143,"skipped":2118,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:34:01.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-9a856e13-13d9-411e-b039-7fa2e13d9dad
STEP: Creating a pod to test consume configMaps
Jul 20 21:34:01.363: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ba83cbf-fe31-470b-8ff2-3ef5a6d10baf" in namespace "configmap-1623" to be "success or failure"
Jul 20 21:34:01.366: INFO: Pod "pod-configmaps-5ba83cbf-fe31-470b-8ff2-3ef5a6d10baf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.424666ms
Jul 20 21:34:03.369: INFO: Pod "pod-configmaps-5ba83cbf-fe31-470b-8ff2-3ef5a6d10baf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006113421s
Jul 20 21:34:05.373: INFO: Pod "pod-configmaps-5ba83cbf-fe31-470b-8ff2-3ef5a6d10baf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010138968s
STEP: Saw pod success
Jul 20 21:34:05.373: INFO: Pod "pod-configmaps-5ba83cbf-fe31-470b-8ff2-3ef5a6d10baf" satisfied condition "success or failure"
Jul 20 21:34:05.376: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5ba83cbf-fe31-470b-8ff2-3ef5a6d10baf container configmap-volume-test: 
STEP: delete the pod
Jul 20 21:34:05.424: INFO: Waiting for pod pod-configmaps-5ba83cbf-fe31-470b-8ff2-3ef5a6d10baf to disappear
Jul 20 21:34:05.441: INFO: Pod pod-configmaps-5ba83cbf-fe31-470b-8ff2-3ef5a6d10baf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:34:05.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1623" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2143,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:34:05.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 20 21:34:05.592: INFO: Waiting up to 5m0s for pod "pod-6d44126f-9b7d-4bf9-b909-6f0da1e6fe8d" in namespace "emptydir-5718" to be "success or failure"
Jul 20 21:34:05.611: INFO: Pod "pod-6d44126f-9b7d-4bf9-b909-6f0da1e6fe8d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.562254ms
Jul 20 21:34:07.824: INFO: Pod "pod-6d44126f-9b7d-4bf9-b909-6f0da1e6fe8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231272103s
Jul 20 21:34:09.961: INFO: Pod "pod-6d44126f-9b7d-4bf9-b909-6f0da1e6fe8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.368245673s
STEP: Saw pod success
Jul 20 21:34:09.961: INFO: Pod "pod-6d44126f-9b7d-4bf9-b909-6f0da1e6fe8d" satisfied condition "success or failure"
Jul 20 21:34:09.978: INFO: Trying to get logs from node jerma-worker pod pod-6d44126f-9b7d-4bf9-b909-6f0da1e6fe8d container test-container: 
STEP: delete the pod
Jul 20 21:34:10.111: INFO: Waiting for pod pod-6d44126f-9b7d-4bf9-b909-6f0da1e6fe8d to disappear
Jul 20 21:34:10.135: INFO: Pod pod-6d44126f-9b7d-4bf9-b909-6f0da1e6fe8d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:34:10.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5718" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2159,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:34:10.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jul 20 21:34:10.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jul 20 21:34:21.634: INFO: >>> kubeConfig: /root/.kube/config
Jul 20 21:34:24.174: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:34:35.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7284" for this suite.

• [SLOW TEST:25.206 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":146,"skipped":2195,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:34:35.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Jul 20 21:34:35.515: INFO: Waiting up to 5m0s for pod "client-containers-d440eefd-ef9d-4596-9361-406472299064" in namespace "containers-2714" to be "success or failure"
Jul 20 21:34:35.535: INFO: Pod "client-containers-d440eefd-ef9d-4596-9361-406472299064": Phase="Pending", Reason="", readiness=false. Elapsed: 20.55732ms
Jul 20 21:34:37.565: INFO: Pod "client-containers-d440eefd-ef9d-4596-9361-406472299064": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050316986s
Jul 20 21:34:39.569: INFO: Pod "client-containers-d440eefd-ef9d-4596-9361-406472299064": Phase="Running", Reason="", readiness=true. Elapsed: 4.053927182s
Jul 20 21:34:41.573: INFO: Pod "client-containers-d440eefd-ef9d-4596-9361-406472299064": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058147119s
STEP: Saw pod success
Jul 20 21:34:41.573: INFO: Pod "client-containers-d440eefd-ef9d-4596-9361-406472299064" satisfied condition "success or failure"
Jul 20 21:34:41.576: INFO: Trying to get logs from node jerma-worker2 pod client-containers-d440eefd-ef9d-4596-9361-406472299064 container test-container: 
STEP: delete the pod
Jul 20 21:34:41.599: INFO: Waiting for pod client-containers-d440eefd-ef9d-4596-9361-406472299064 to disappear
Jul 20 21:34:41.615: INFO: Pod client-containers-d440eefd-ef9d-4596-9361-406472299064 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:34:41.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2714" for this suite.

• [SLOW TEST:6.234 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2213,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:34:41.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul 20 21:34:48.564: INFO: 9 pods remaining
Jul 20 21:34:48.564: INFO: 0 pods have nil DeletionTimestamp
Jul 20 21:34:48.564: INFO: 
Jul 20 21:34:50.985: INFO: 0 pods remaining
Jul 20 21:34:50.986: INFO: 0 pods have nil DeletionTimestamp
Jul 20 21:34:50.986: INFO: 
STEP: Gathering metrics
W0720 21:34:52.773049       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 21:34:52.773: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:34:52.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3853" for this suite.

• [SLOW TEST:11.332 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":148,"skipped":2227,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:34:52.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-68b9504d-854e-4d1c-9622-630cc322dcc8
STEP: Creating a pod to test consume configMaps
Jul 20 21:34:53.714: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2c2aeb05-4283-473f-8ae3-36730ffa41de" in namespace "projected-8352" to be "success or failure"
Jul 20 21:34:53.823: INFO: Pod "pod-projected-configmaps-2c2aeb05-4283-473f-8ae3-36730ffa41de": Phase="Pending", Reason="", readiness=false. Elapsed: 108.332598ms
Jul 20 21:34:55.841: INFO: Pod "pod-projected-configmaps-2c2aeb05-4283-473f-8ae3-36730ffa41de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126564648s
Jul 20 21:34:57.845: INFO: Pod "pod-projected-configmaps-2c2aeb05-4283-473f-8ae3-36730ffa41de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130981138s
STEP: Saw pod success
Jul 20 21:34:57.845: INFO: Pod "pod-projected-configmaps-2c2aeb05-4283-473f-8ae3-36730ffa41de" satisfied condition "success or failure"
Jul 20 21:34:57.848: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-2c2aeb05-4283-473f-8ae3-36730ffa41de container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 21:34:57.904: INFO: Waiting for pod pod-projected-configmaps-2c2aeb05-4283-473f-8ae3-36730ffa41de to disappear
Jul 20 21:34:57.909: INFO: Pod pod-projected-configmaps-2c2aeb05-4283-473f-8ae3-36730ffa41de no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:34:57.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8352" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2269,"failed":0}
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:34:57.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-6396/configmap-test-adda6046-e94f-49cc-9ca3-b4eedc1f1997
STEP: Creating a pod to test consume configMaps
Jul 20 21:34:58.007: INFO: Waiting up to 5m0s for pod "pod-configmaps-b8633be3-28c0-4618-abfd-4a8bbbe42930" in namespace "configmap-6396" to be "success or failure"
Jul 20 21:34:58.017: INFO: Pod "pod-configmaps-b8633be3-28c0-4618-abfd-4a8bbbe42930": Phase="Pending", Reason="", readiness=false. Elapsed: 10.253417ms
Jul 20 21:35:00.021: INFO: Pod "pod-configmaps-b8633be3-28c0-4618-abfd-4a8bbbe42930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014170128s
Jul 20 21:35:02.025: INFO: Pod "pod-configmaps-b8633be3-28c0-4618-abfd-4a8bbbe42930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017916031s
STEP: Saw pod success
Jul 20 21:35:02.025: INFO: Pod "pod-configmaps-b8633be3-28c0-4618-abfd-4a8bbbe42930" satisfied condition "success or failure"
Jul 20 21:35:02.028: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b8633be3-28c0-4618-abfd-4a8bbbe42930 container env-test: 
STEP: delete the pod
Jul 20 21:35:02.100: INFO: Waiting for pod pod-configmaps-b8633be3-28c0-4618-abfd-4a8bbbe42930 to disappear
Jul 20 21:35:02.114: INFO: Pod pod-configmaps-b8633be3-28c0-4618-abfd-4a8bbbe42930 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:35:02.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6396" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2271,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:35:02.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:35:02.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7240" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":151,"skipped":2281,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:35:02.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:35:02.412: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul 20 21:35:07.430: INFO: Pod name rollover-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Jul 20 21:35:07.430: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul 20 21:35:09.434: INFO: Creating deployment "test-rollover-deployment"
Jul 20 21:35:09.442: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul 20 21:35:11.449: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul 20 21:35:11.455: INFO: Ensure that both replica sets have 1 created replica
Jul 20 21:35:11.461: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul 20 21:35:11.467: INFO: Updating deployment test-rollover-deployment
Jul 20 21:35:11.467: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Jul 20 21:35:13.476: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul 20 21:35:13.481: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul 20 21:35:13.487: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 21:35:13.487: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877711, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:35:15.495: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 21:35:15.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877714, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:35:17.494: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 21:35:17.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877714, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:35:19.495: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 21:35:19.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877714, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:35:21.495: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 21:35:21.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877714, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:35:23.495: INFO: all replica sets need to contain the pod-template-hash label
Jul 20 21:35:23.495: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877714, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877709, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:35:25.496: INFO: 
Jul 20 21:35:25.496: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul 20 21:35:25.503: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-2383 /apis/apps/v1/namespaces/deployment-2383/deployments/test-rollover-deployment 532f6894-99b0-48d4-ad70-72e637d5726d 2875031 2 2020-07-20 21:35:09 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037a8558  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-20 21:35:09 +0000 UTC,LastTransitionTime:2020-07-20 21:35:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-07-20 21:35:24 +0000 UTC,LastTransitionTime:2020-07-20 21:35:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul 20 21:35:25.505: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-2383 /apis/apps/v1/namespaces/deployment-2383/replicasets/test-rollover-deployment-574d6dfbff 92751b77-6900-4374-8930-7fc508c7be00 2875020 2 2020-07-20 21:35:11 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 532f6894-99b0-48d4-ad70-72e637d5726d 0xc00441ecc7 0xc00441ecc8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00441ed38  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul 20 21:35:25.505: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul 20 21:35:25.505: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-2383 /apis/apps/v1/namespaces/deployment-2383/replicasets/test-rollover-controller fe4bd053-dda0-4a13-83aa-71f22fa0f001 2875029 2 2020-07-20 21:35:02 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 532f6894-99b0-48d4-ad70-72e637d5726d 0xc00441eb0f 0xc00441eb20}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00441ebd8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 20 21:35:25.505: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-2383 /apis/apps/v1/namespaces/deployment-2383/replicasets/test-rollover-deployment-f6c94f66c ec4006e1-ffa2-4415-ab84-e1da8c7c1f16 2874974 2 2020-07-20 21:35:09 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 532f6894-99b0-48d4-ad70-72e637d5726d 0xc00441edc0 0xc00441edc1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00441ee58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 20 21:35:25.508: INFO: Pod "test-rollover-deployment-574d6dfbff-p4sbh" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-p4sbh test-rollover-deployment-574d6dfbff- deployment-2383 /api/v1/namespaces/deployment-2383/pods/test-rollover-deployment-574d6dfbff-p4sbh 58268ee0-a1a4-4577-88b2-c56b57bf50d6 2874988 0 2020-07-20 21:35:11 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 92751b77-6900-4374-8930-7fc508c7be00 0xc0037a8d47 0xc0037a8d48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jz9p9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jz9p9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jz9p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:35:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:35:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:35:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:35:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.89,StartTime:2020-07-20 21:35:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 21:35:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c0b058f4bbde7b3c2854dc24688a20be17e0b56b99c96a81ed2fbb97818d6c6b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.89,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:35:25.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2383" for this suite.

• [SLOW TEST:23.270 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":152,"skipped":2286,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:35:25.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-71382983-dd9a-4952-a98d-87a2eae8d768
STEP: Creating secret with name s-test-opt-upd-c6dd779f-8e39-4cd7-a150-8083177e9742
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-71382983-dd9a-4952-a98d-87a2eae8d768
STEP: Updating secret s-test-opt-upd-c6dd779f-8e39-4cd7-a150-8083177e9742
STEP: Creating secret with name s-test-opt-create-849dc72a-5d9f-4d57-937f-22abac0a5c15
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:35:33.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7280" for this suite.

• [SLOW TEST:8.285 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2307,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:35:33.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 20 21:35:33.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-57'
Jul 20 21:35:33.965: INFO: stderr: ""
Jul 20 21:35:33.965: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jul 20 21:35:39.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-57 -o json'
Jul 20 21:35:39.118: INFO: stderr: ""
Jul 20 21:35:39.118: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-07-20T21:35:33Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-57\",\n        \"resourceVersion\": \"2875141\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-57/pods/e2e-test-httpd-pod\",\n        \"uid\": \"b52125c0-6a94-4485-a8ae-c10d40e17608\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-45pzm\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-45pzm\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-45pzm\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-20T21:35:33Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-20T21:35:37Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-20T21:35:37Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-20T21:35:33Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://0657a0c85a595f2fcf238480a64d2540e4363d37e90ff9018d5019e40cbd7cff\",\n                
\"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-07-20T21:35:36Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.10\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.90\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.90\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-07-20T21:35:33Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jul 20 21:35:39.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-57'
Jul 20 21:35:39.659: INFO: stderr: ""
Jul 20 21:35:39.659: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795
Jul 20 21:35:39.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-57'
Jul 20 21:35:47.348: INFO: stderr: ""
Jul 20 21:35:47.348: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:35:47.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-57" for this suite.

• [SLOW TEST:13.627 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":154,"skipped":2311,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
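
For reference, the replace flow above can be reproduced by hand roughly as follows. The pod name and images are taken from the log; the --restart=Never flag is an assumption for creating a bare pod with a v1.17-era kubectl.

$ kubectl run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --restart=Never
$ kubectl get pod e2e-test-httpd-pod -o json \
    | sed 's|httpd:2.4.38-alpine|busybox:1.29|' \
    | kubectl replace -f -
pod/e2e-test-httpd-pod replaced
$ kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'
docker.io/library/busybox:1.29

The replace succeeds only because a container's image is one of the few mutable fields in a running pod's spec; changing most other spec fields would be rejected by the API server.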
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:35:47.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-f8c6e6ec-a9a3-424a-a145-ab8e2fc1bf85
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f8c6e6ec-a9a3-424a-a145-ab8e2fc1bf85
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:35:53.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9967" for this suite.

• [SLOW TEST:6.206 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2406,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
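
A minimal sketch of the propagation behavior checked above; every name, key, and value here is illustrative. The boolean --dry-run form matches a v1.17-era kubectl; newer clients spell it --dry-run=client.

$ kubectl create configmap cm-upd --from-literal=data-1=value-1
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-upd
EOF
$ kubectl create configmap cm-upd --from-literal=data-1=value-2 --dry-run -o yaml | kubectl apply -f -
$ kubectl logs -f cm-watch   # the mounted file flips to value-2 after the kubelet's next sync period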
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:35:53.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-secret-9lhs
STEP: Creating a pod to test atomic-volume-subpath
Jul 20 21:35:53.746: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9lhs" in namespace "subpath-9378" to be "success or failure"
Jul 20 21:35:53.782: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Pending", Reason="", readiness=false. Elapsed: 36.362419ms
Jul 20 21:35:55.786: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040368813s
Jul 20 21:35:57.790: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Running", Reason="", readiness=true. Elapsed: 4.044282997s
Jul 20 21:35:59.795: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Running", Reason="", readiness=true. Elapsed: 6.049075401s
Jul 20 21:36:01.799: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Running", Reason="", readiness=true. Elapsed: 8.053361871s
Jul 20 21:36:03.803: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Running", Reason="", readiness=true. Elapsed: 10.057461525s
Jul 20 21:36:05.807: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Running", Reason="", readiness=true. Elapsed: 12.061139526s
Jul 20 21:36:07.811: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Running", Reason="", readiness=true. Elapsed: 14.065382387s
Jul 20 21:36:09.816: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Running", Reason="", readiness=true. Elapsed: 16.069909958s
Jul 20 21:36:11.819: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Running", Reason="", readiness=true. Elapsed: 18.073487352s
Jul 20 21:36:13.823: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Running", Reason="", readiness=true. Elapsed: 20.07769998s
Jul 20 21:36:15.827: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Running", Reason="", readiness=true. Elapsed: 22.081552857s
Jul 20 21:36:17.831: INFO: Pod "pod-subpath-test-secret-9lhs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.085471303s
STEP: Saw pod success
Jul 20 21:36:17.831: INFO: Pod "pod-subpath-test-secret-9lhs" satisfied condition "success or failure"
Jul 20 21:36:17.834: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-9lhs container test-container-subpath-secret-9lhs: 
STEP: delete the pod
Jul 20 21:36:17.945: INFO: Waiting for pod pod-subpath-test-secret-9lhs to disappear
Jul 20 21:36:18.032: INFO: Pod pod-subpath-test-secret-9lhs no longer exists
STEP: Deleting pod pod-subpath-test-secret-9lhs
Jul 20 21:36:18.032: INFO: Deleting pod "pod-subpath-test-secret-9lhs" in namespace "subpath-9378"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:36:18.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9378" for this suite.

• [SLOW TEST:24.507 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":156,"skipped":2433,"failed":0}
SSSSS
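
A rough equivalent of the subpath pod above, with illustrative names: a single key of a secret is mounted as one file via subPath.

$ kubectl create secret generic sub-secret --from-literal=password=s3cr3t
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-subpath
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/creds/password"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds/password
      subPath: password
  volumes:
  - name: creds
    secret:
      secretName: sub-secret
EOF

Unlike whole-volume mounts, subPath mounts are bind-mounted out of the atomically written volume and do not pick up later updates to the secret, which is why the atomic-writer suite exercises them separately.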
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:36:18.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 20 21:36:22.725: INFO: Successfully updated pod "pod-update-63bebe1a-ae67-42db-803b-e7f9704388db"
STEP: verifying the updated pod is in kubernetes
Jul 20 21:36:22.747: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:36:22.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4838" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2438,"failed":0}
SS
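
The "update" verified above is restricted to the mutable parts of a pod object, such as labels and annotations. A hand-rolled equivalent with an illustrative pod name and label:

$ kubectl patch pod pod-update-demo --type=merge -p '{"metadata":{"labels":{"time":"modified"}}}'
$ kubectl get pod pod-update-demo --show-labels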
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:36:22.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6609
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 20 21:36:22.850: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 20 21:36:47.122: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.135:8080/dial?request=hostname&protocol=http&host=10.244.2.134&port=8080&tries=1'] Namespace:pod-network-test-6609 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:36:47.122: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:36:47.158723       6 log.go:172] (0xc001dec160) (0xc000f9c8c0) Create stream
I0720 21:36:47.158758       6 log.go:172] (0xc001dec160) (0xc000f9c8c0) Stream added, broadcasting: 1
I0720 21:36:47.160858       6 log.go:172] (0xc001dec160) Reply frame received for 1
I0720 21:36:47.160914       6 log.go:172] (0xc001dec160) (0xc000f9cfa0) Create stream
I0720 21:36:47.160963       6 log.go:172] (0xc001dec160) (0xc000f9cfa0) Stream added, broadcasting: 3
I0720 21:36:47.162149       6 log.go:172] (0xc001dec160) Reply frame received for 3
I0720 21:36:47.162239       6 log.go:172] (0xc001dec160) (0xc001f1c500) Create stream
I0720 21:36:47.162266       6 log.go:172] (0xc001dec160) (0xc001f1c500) Stream added, broadcasting: 5
I0720 21:36:47.163388       6 log.go:172] (0xc001dec160) Reply frame received for 5
I0720 21:36:47.227492       6 log.go:172] (0xc001dec160) Data frame received for 3
I0720 21:36:47.227545       6 log.go:172] (0xc000f9cfa0) (3) Data frame handling
I0720 21:36:47.227570       6 log.go:172] (0xc000f9cfa0) (3) Data frame sent
I0720 21:36:47.228306       6 log.go:172] (0xc001dec160) Data frame received for 5
I0720 21:36:47.228338       6 log.go:172] (0xc001f1c500) (5) Data frame handling
I0720 21:36:47.228358       6 log.go:172] (0xc001dec160) Data frame received for 3
I0720 21:36:47.228368       6 log.go:172] (0xc000f9cfa0) (3) Data frame handling
I0720 21:36:47.230168       6 log.go:172] (0xc001dec160) Data frame received for 1
I0720 21:36:47.230186       6 log.go:172] (0xc000f9c8c0) (1) Data frame handling
I0720 21:36:47.230195       6 log.go:172] (0xc000f9c8c0) (1) Data frame sent
I0720 21:36:47.230205       6 log.go:172] (0xc001dec160) (0xc000f9c8c0) Stream removed, broadcasting: 1
I0720 21:36:47.230251       6 log.go:172] (0xc001dec160) Go away received
I0720 21:36:47.230287       6 log.go:172] (0xc001dec160) (0xc000f9c8c0) Stream removed, broadcasting: 1
I0720 21:36:47.230306       6 log.go:172] (0xc001dec160) (0xc000f9cfa0) Stream removed, broadcasting: 3
I0720 21:36:47.230314       6 log.go:172] (0xc001dec160) (0xc001f1c500) Stream removed, broadcasting: 5
Jul 20 21:36:47.230: INFO: Waiting for responses: map[]
Jul 20 21:36:47.233: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.135:8080/dial?request=hostname&protocol=http&host=10.244.1.92&port=8080&tries=1'] Namespace:pod-network-test-6609 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:36:47.234: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:36:47.265623       6 log.go:172] (0xc00282c000) (0xc002716960) Create stream
I0720 21:36:47.265649       6 log.go:172] (0xc00282c000) (0xc002716960) Stream added, broadcasting: 1
I0720 21:36:47.267372       6 log.go:172] (0xc00282c000) Reply frame received for 1
I0720 21:36:47.267405       6 log.go:172] (0xc00282c000) (0xc001f1c640) Create stream
I0720 21:36:47.267417       6 log.go:172] (0xc00282c000) (0xc001f1c640) Stream added, broadcasting: 3
I0720 21:36:47.268396       6 log.go:172] (0xc00282c000) Reply frame received for 3
I0720 21:36:47.268434       6 log.go:172] (0xc00282c000) (0xc001f1c6e0) Create stream
I0720 21:36:47.268451       6 log.go:172] (0xc00282c000) (0xc001f1c6e0) Stream added, broadcasting: 5
I0720 21:36:47.269584       6 log.go:172] (0xc00282c000) Reply frame received for 5
I0720 21:36:47.331784       6 log.go:172] (0xc00282c000) Data frame received for 3
I0720 21:36:47.331831       6 log.go:172] (0xc001f1c640) (3) Data frame handling
I0720 21:36:47.331859       6 log.go:172] (0xc001f1c640) (3) Data frame sent
I0720 21:36:47.332527       6 log.go:172] (0xc00282c000) Data frame received for 5
I0720 21:36:47.332548       6 log.go:172] (0xc001f1c6e0) (5) Data frame handling
I0720 21:36:47.332912       6 log.go:172] (0xc00282c000) Data frame received for 3
I0720 21:36:47.332940       6 log.go:172] (0xc001f1c640) (3) Data frame handling
I0720 21:36:47.334535       6 log.go:172] (0xc00282c000) Data frame received for 1
I0720 21:36:47.334566       6 log.go:172] (0xc002716960) (1) Data frame handling
I0720 21:36:47.334591       6 log.go:172] (0xc002716960) (1) Data frame sent
I0720 21:36:47.334609       6 log.go:172] (0xc00282c000) (0xc002716960) Stream removed, broadcasting: 1
I0720 21:36:47.334627       6 log.go:172] (0xc00282c000) Go away received
I0720 21:36:47.334749       6 log.go:172] (0xc00282c000) (0xc002716960) Stream removed, broadcasting: 1
I0720 21:36:47.334764       6 log.go:172] (0xc00282c000) (0xc001f1c640) Stream removed, broadcasting: 3
I0720 21:36:47.334773       6 log.go:172] (0xc00282c000) (0xc001f1c6e0) Stream removed, broadcasting: 5
Jul 20 21:36:47.334: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:36:47.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6609" for this suite.

• [SLOW TEST:24.578 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2440,"failed":0}
SSSSSS
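
The ExecWithOptions entries above amount to curl-ing agnhost's /dial endpoint from the host test pod, once per target pod IP; by hand, using the pod name and IPs from the log:

$ kubectl exec -n pod-network-test-6609 host-test-container-pod -c agnhost -- \
    /bin/sh -c "curl -g -q -s 'http://10.244.2.135:8080/dial?request=hostname&protocol=http&host=10.244.2.134&port=8080&tries=1'"

The "Waiting for responses: map[]" lines mean the set of still-expected hostnames is empty, i.e. every target pod answered.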
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:36:47.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-d5e09a8e-0859-4b62-ad3d-4c34b2589071
STEP: Creating a pod to test consume secrets
Jul 20 21:36:47.514: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-39734983-44c2-4a59-82a6-36c92833e274" in namespace "projected-3913" to be "success or failure"
Jul 20 21:36:47.517: INFO: Pod "pod-projected-secrets-39734983-44c2-4a59-82a6-36c92833e274": Phase="Pending", Reason="", readiness=false. Elapsed: 3.304094ms
Jul 20 21:36:49.521: INFO: Pod "pod-projected-secrets-39734983-44c2-4a59-82a6-36c92833e274": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006939887s
Jul 20 21:36:51.549: INFO: Pod "pod-projected-secrets-39734983-44c2-4a59-82a6-36c92833e274": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035484458s
STEP: Saw pod success
Jul 20 21:36:51.549: INFO: Pod "pod-projected-secrets-39734983-44c2-4a59-82a6-36c92833e274" satisfied condition "success or failure"
Jul 20 21:36:51.552: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-39734983-44c2-4a59-82a6-36c92833e274 container secret-volume-test: 
STEP: delete the pod
Jul 20 21:36:51.585: INFO: Waiting for pod pod-projected-secrets-39734983-44c2-4a59-82a6-36c92833e274 to disappear
Jul 20 21:36:51.589: INFO: Pod pod-projected-secrets-39734983-44c2-4a59-82a6-36c92833e274 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:36:51.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3913" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2446,"failed":0}
SSSSSSSSS
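
A sketch of the multi-volume pattern above: the same secret projected into one pod at two different mount points. Names are illustrative.

$ kubectl create secret generic multi-secret --from-literal=data-1=value-1
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-multi
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/secret-one/data-1 /etc/secret-two/data-1"]
    volumeMounts:
    - name: secret-one
      mountPath: /etc/secret-one
    - name: secret-two
      mountPath: /etc/secret-two
  volumes:
  - name: secret-one
    projected:
      sources:
      - secret:
          name: multi-secret
  - name: secret-two
    projected:
      sources:
      - secret:
          name: multi-secret
EOF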
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:36:51.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:36:51.695: INFO: Creating deployment "test-recreate-deployment"
Jul 20 21:36:51.699: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul 20 21:36:51.772: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jul 20 21:36:53.900: INFO: Waiting deployment "test-recreate-deployment" to complete
Jul 20 21:36:54.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877811, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877811, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877811, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877811, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:36:56.128: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul 20 21:36:56.135: INFO: Updating deployment test-recreate-deployment
Jul 20 21:36:56.135: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul 20 21:36:56.714: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-1788 /apis/apps/v1/namespaces/deployment-1788/deployments/test-recreate-deployment 5582b154-a250-41eb-912d-7680b869049a 2875622 2 2020-07-20 21:36:51 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0029c3748  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-20 21:36:56 +0000 UTC,LastTransitionTime:2020-07-20 21:36:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-07-20 21:36:56 +0000 UTC,LastTransitionTime:2020-07-20 21:36:51 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jul 20 21:36:56.718: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-1788 /apis/apps/v1/namespaces/deployment-1788/replicasets/test-recreate-deployment-5f94c574ff c0ce4e1e-4bc2-4006-aca5-39a19ddf2abb 2875621 1 2020-07-20 21:36:56 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 5582b154-a250-41eb-912d-7680b869049a 0xc0052f4fc7 0xc0052f4fc8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052f5028  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 20 21:36:56.718: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul 20 21:36:56.718: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-1788 /apis/apps/v1/namespaces/deployment-1788/replicasets/test-recreate-deployment-799c574856 3cff09ca-99d5-467f-b0cd-f8645927b8e7 2875610 2 2020-07-20 21:36:51 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 5582b154-a250-41eb-912d-7680b869049a 0xc0052f5097 0xc0052f5098}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0052f5108  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 20 21:36:56.820: INFO: Pod "test-recreate-deployment-5f94c574ff-fmlm7" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-fmlm7 test-recreate-deployment-5f94c574ff- deployment-1788 /api/v1/namespaces/deployment-1788/pods/test-recreate-deployment-5f94c574ff-fmlm7 b2170d61-9882-4c6b-aa56-50ad1ad1b731 2875623 0 2020-07-20 21:36:56 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff c0ce4e1e-4bc2-4006-aca5-39a19ddf2abb 0xc0029c3ad7 0xc0029c3ad8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4n8ct,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4n8ct,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4n8ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:36:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:36:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:36:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 21:36:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-07-20 21:36:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:36:56.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1788" for this suite.

• [SLOW TEST:5.232 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":160,"skipped":2455,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
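
The status dumps above show the defining property of the Recreate strategy: the old ReplicaSet is scaled to zero before the new one starts, so ReadyReplicas passes through 0 during the rollout. A minimal deployment with that strategy (names and image tags illustrative):

$ cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
$ kubectl set image deployment/recreate-demo httpd=gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # any template change triggers the rollout
$ kubectl rollout status deployment/recreate-demo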
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:36:56.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul 20 21:37:01.551: INFO: Successfully updated pod "annotationupdateb7bfe1a7-c4b7-40dc-b29c-7b27f03784d7"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:37:03.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8020" for this suite.

• [SLOW TEST:6.765 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2477,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
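
What the annotation-update spec above relies on: a projected downwardAPI volume exposing metadata.annotations as a file, which the kubelet rewrites after the pod's annotations change. Names and values are illustrative.

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: one
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
$ kubectl annotate pod annotation-demo build=two --overwrite
$ kubectl logs -f annotation-demo   # the file content follows the new annotation after a sync period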
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:37:03.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-f15b8e5d-c052-45db-a809-b07eedc3bdc4
STEP: Creating a pod to test consume configMaps
Jul 20 21:37:03.717: INFO: Waiting up to 5m0s for pod "pod-configmaps-8be74583-1677-4a05-8ba4-74194f7b6364" in namespace "configmap-1454" to be "success or failure"
Jul 20 21:37:03.777: INFO: Pod "pod-configmaps-8be74583-1677-4a05-8ba4-74194f7b6364": Phase="Pending", Reason="", readiness=false. Elapsed: 59.387322ms
Jul 20 21:37:05.780: INFO: Pod "pod-configmaps-8be74583-1677-4a05-8ba4-74194f7b6364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062918524s
Jul 20 21:37:07.784: INFO: Pod "pod-configmaps-8be74583-1677-4a05-8ba4-74194f7b6364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06686238s
STEP: Saw pod success
Jul 20 21:37:07.784: INFO: Pod "pod-configmaps-8be74583-1677-4a05-8ba4-74194f7b6364" satisfied condition "success or failure"
Jul 20 21:37:07.788: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-8be74583-1677-4a05-8ba4-74194f7b6364 container configmap-volume-test: 
STEP: delete the pod
Jul 20 21:37:07.813: INFO: Waiting for pod pod-configmaps-8be74583-1677-4a05-8ba4-74194f7b6364 to disappear
Jul 20 21:37:07.817: INFO: Pod pod-configmaps-8be74583-1677-4a05-8ba4-74194f7b6364 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:37:07.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1454" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2531,"failed":0}
SSSSSSS
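
The non-root variant above boils down to running the consuming container with a non-zero UID; with the default defaultMode of 420 (octal 0644) the projected files stay world-readable, so the read still succeeds. Names and the UID are illustrative.

$ kubectl create configmap cm-nonroot --from-literal=data-1=value-1
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "id -u && cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-nonroot
EOF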
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:37:07.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:37:07.976: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:37:14.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-897" for this suite.

• [SLOW TEST:6.489 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":163,"skipped":2538,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
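
The listing operation above is a plain LIST against the customresourcedefinitions resource. By hand, after creating a throwaway CRD; the crontabs definition below is the stock documentation example, not the randomly named CRD the test creates:

$ cat <<EOF | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
$ kubectl get customresourcedefinitions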
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:37:14.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Jul 20 21:37:18.986: INFO: Successfully updated pod "annotationupdatea8815fad-c8b7-4319-a11a-68e7cb9fc259"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:37:21.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8226" for this suite.

• [SLOW TEST:6.784 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2564,"failed":0}
SSSSSSSSSSSSSSSSSSS
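
Same behavior as the projected downwardAPI spec a few tests back, but via the standalone downwardAPI volume type; names are illustrative.

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
  annotations:
    build: one
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
$ kubectl annotate pod downward-demo build=two --overwrite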
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:37:21.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0720 21:38:01.487976       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 21:38:01.488: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:38:01.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9486" for this suite.

• [SLOW TEST:40.397 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":165,"skipped":2583,"failed":0}
SSSSSSSSSS
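
"Delete options say so" above means the delete request carries an Orphan propagation policy: the replication controller goes away, the garbage collector strips the ownerReferences from its pods, and the pods keep running. With kubectl (rc name and label illustrative):

$ kubectl delete rc orphan-demo --cascade=false    # v1.17-era spelling; newer kubectl uses --cascade=orphan
$ kubectl get pods -l name=orphan-demo             # the pods survive the rc's deletion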
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:38:01.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 21:38:02.356: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 21:38:04.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877882, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877882, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877882, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877882, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:38:06.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877882, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877882, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877882, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877882, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 21:38:10.339: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:38:22.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7593" for this suite.
STEP: Destroying namespace "webhook-7593-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:21.210 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":166,"skipped":2593,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
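
The three timeout scenarios above map onto two fields of the webhook registration: timeoutSeconds and failurePolicy. A sketch of the failing case; the service coordinates and the /always-allow-delay-5s path follow the e2e webhook's conventions but should be treated as illustrative here, and the caBundle needed for real TLS verification is omitted.

$ cat <<EOF | kubectl create -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook.example.com
webhooks:
- name: slow-webhook.example.com
  timeoutSeconds: 1          # shorter than the webhook's 5s delay, so the API request fails...
  failurePolicy: Fail        # ...unless this is Ignore, in which case the timeout is swallowed
  clientConfig:
    service:
      namespace: webhook-7593
      name: e2e-test-webhook
      path: /always-allow-delay-5s
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF

Leaving timeoutSeconds unset defaults it to 10s in admissionregistration.k8s.io/v1, which is the last scenario exercised above.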
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:38:22.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 21:38:24.562: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 21:38:26.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877904, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877904, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877904, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730877904, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 21:38:29.652: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:38:29.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6266-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:38:30.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6375" for this suite.
STEP: Destroying namespace "webhook-6375-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.257 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":167,"skipped":2630,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
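
The registration step above targets the custom resource's API group and plural name, both visible in the log; the webhook path and apiVersions are illustrative, and the caBundle is again omitted. A sketch:

$ cat <<EOF | kubectl create -f -
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource.example.com
webhooks:
- name: mutate-custom-resource.example.com
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-6266-crds"]
  clientConfig:
    service:
      namespace: webhook-6375
      name: e2e-test-webhook
      path: /mutating-custom-resource
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF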
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:38:30.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:38:31.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9655'
Jul 20 21:38:31.485: INFO: stderr: ""
Jul 20 21:38:31.485: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jul 20 21:38:31.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9655'
Jul 20 21:38:31.990: INFO: stderr: ""
Jul 20 21:38:31.990: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul 20 21:38:32.995: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 21:38:32.995: INFO: Found 0 / 1
Jul 20 21:38:34.293: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 21:38:34.293: INFO: Found 0 / 1
Jul 20 21:38:34.995: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 21:38:34.995: INFO: Found 0 / 1
Jul 20 21:38:35.994: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 21:38:35.994: INFO: Found 1 / 1
Jul 20 21:38:35.994: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 20 21:38:35.997: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 21:38:35.997: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 20 21:38:35.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-gp8d6 --namespace=kubectl-9655'
Jul 20 21:38:36.115: INFO: stderr: ""
Jul 20 21:38:36.115: INFO: stdout: "Name:         agnhost-master-gp8d6\nNamespace:    kubectl-9655\nPriority:     0\nNode:         jerma-worker/172.18.0.6\nStart Time:   Mon, 20 Jul 2020 21:38:31 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.2.145\nIPs:\n  IP:           10.244.2.145\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://abc57366d3845bccbdb6ee3de198291ae6e1e0f98efc6235578beb45f41a4935\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 20 Jul 2020 21:38:35 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z75nx (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-z75nx:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-z75nx\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  5s    default-scheduler      Successfully assigned kubectl-9655/agnhost-master-gp8d6 to jerma-worker\n  Normal  Pulled     4s    kubelet, jerma-worker  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s    kubelet, jerma-worker  Created container agnhost-master\n  Normal  Started    1s    kubelet, jerma-worker  Started container agnhost-master\n"
Jul 20 21:38:36.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9655'
Jul 20 21:38:36.232: INFO: stderr: ""
Jul 20 21:38:36.232: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-9655\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-gp8d6\n"
Jul 20 21:38:36.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9655'
Jul 20 21:38:36.337: INFO: stderr: ""
Jul 20 21:38:36.337: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-9655\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.153.31\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.145:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jul 20 21:38:36.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Jul 20 21:38:36.452: INFO: stderr: ""
Jul 20 21:38:36.452: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 10 Jul 2020 10:25:55 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 20 Jul 2020 21:38:29 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 20 Jul 2020 21:33:55 +0000   Fri, 10 Jul 2020 10:25:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 20 Jul 2020 21:33:55 +0000   Fri, 10 Jul 2020 10:25:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 20 Jul 2020 21:33:55 +0000   Fri, 10 Jul 2020 10:25:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 20 Jul 2020 21:33:55 +0000   Fri, 10 Jul 2020 10:26:30 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.3\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759872Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 78cb62e1bd20401ebc9a91779e3da282\n  System UUID:                5fa8becb-168a-4d58-8252-a288ac7a8260\n  Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n  Kernel Version:             4.15.0-109-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-9rqh9                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     10d\n  kube-system                 coredns-6955765f44-bq97f                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     10d\n  kube-system                 etcd-jerma-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10d\n  kube-system                 kindnet-b87md                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      10d\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         10d\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         10d\n  kube-system                 kube-proxy-svrlv                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10d\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         10d\n  local-path-storage          local-path-provisioner-58f6947c7-rkzsd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Jul 20 21:38:36.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9655'
Jul 20 21:38:36.594: INFO: stderr: ""
Jul 20 21:38:36.594: INFO: stdout: "Name:         kubectl-9655\nLabels:       e2e-framework=kubectl\n              e2e-run=028a60c4-aca1-4b1c-a4d1-f6b0cd25560b\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:38:36.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9655" for this suite.

• [SLOW TEST:5.636 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":168,"skipped":2683,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:38:36.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul 20 21:38:36.717: INFO: Waiting up to 5m0s for pod "downward-api-581fc562-ae74-4b1f-8686-2f17786b93ae" in namespace "downward-api-5209" to be "success or failure"
Jul 20 21:38:36.721: INFO: Pod "downward-api-581fc562-ae74-4b1f-8686-2f17786b93ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078394ms
Jul 20 21:38:38.803: INFO: Pod "downward-api-581fc562-ae74-4b1f-8686-2f17786b93ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086256092s
Jul 20 21:38:40.970: INFO: Pod "downward-api-581fc562-ae74-4b1f-8686-2f17786b93ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.253104407s
STEP: Saw pod success
Jul 20 21:38:40.970: INFO: Pod "downward-api-581fc562-ae74-4b1f-8686-2f17786b93ae" satisfied condition "success or failure"
Jul 20 21:38:40.973: INFO: Trying to get logs from node jerma-worker2 pod downward-api-581fc562-ae74-4b1f-8686-2f17786b93ae container dapi-container: 
STEP: delete the pod
Jul 20 21:38:41.056: INFO: Waiting for pod downward-api-581fc562-ae74-4b1f-8686-2f17786b93ae to disappear
Jul 20 21:38:41.060: INFO: Pod downward-api-581fc562-ae74-4b1f-8686-2f17786b93ae no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:38:41.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5209" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2687,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:38:41.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Jul 20 21:38:41.175: INFO: Waiting up to 5m0s for pod "var-expansion-4a2fa3c5-2adb-41ce-baf6-9b05a178841e" in namespace "var-expansion-4401" to be "success or failure"
Jul 20 21:38:41.178: INFO: Pod "var-expansion-4a2fa3c5-2adb-41ce-baf6-9b05a178841e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.871019ms
Jul 20 21:38:43.221: INFO: Pod "var-expansion-4a2fa3c5-2adb-41ce-baf6-9b05a178841e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046501918s
Jul 20 21:38:45.226: INFO: Pod "var-expansion-4a2fa3c5-2adb-41ce-baf6-9b05a178841e": Phase="Running", Reason="", readiness=true. Elapsed: 4.05088172s
Jul 20 21:38:47.229: INFO: Pod "var-expansion-4a2fa3c5-2adb-41ce-baf6-9b05a178841e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054096894s
STEP: Saw pod success
Jul 20 21:38:47.229: INFO: Pod "var-expansion-4a2fa3c5-2adb-41ce-baf6-9b05a178841e" satisfied condition "success or failure"
Jul 20 21:38:47.231: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-4a2fa3c5-2adb-41ce-baf6-9b05a178841e container dapi-container: 
STEP: delete the pod
Jul 20 21:38:47.251: INFO: Waiting for pod var-expansion-4a2fa3c5-2adb-41ce-baf6-9b05a178841e to disappear
Jul 20 21:38:47.256: INFO: Pod var-expansion-4a2fa3c5-2adb-41ce-baf6-9b05a178841e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:38:47.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4401" for this suite.

• [SLOW TEST:6.194 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2716,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:38:47.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 20 21:38:47.398: INFO: Waiting up to 5m0s for pod "pod-b91ebd78-c60a-4ea1-9be4-dce35bbe8b04" in namespace "emptydir-7822" to be "success or failure"
Jul 20 21:38:47.406: INFO: Pod "pod-b91ebd78-c60a-4ea1-9be4-dce35bbe8b04": Phase="Pending", Reason="", readiness=false. Elapsed: 7.39391ms
Jul 20 21:38:49.533: INFO: Pod "pod-b91ebd78-c60a-4ea1-9be4-dce35bbe8b04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134901429s
Jul 20 21:38:51.537: INFO: Pod "pod-b91ebd78-c60a-4ea1-9be4-dce35bbe8b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138467482s
STEP: Saw pod success
Jul 20 21:38:51.537: INFO: Pod "pod-b91ebd78-c60a-4ea1-9be4-dce35bbe8b04" satisfied condition "success or failure"
Jul 20 21:38:51.540: INFO: Trying to get logs from node jerma-worker2 pod pod-b91ebd78-c60a-4ea1-9be4-dce35bbe8b04 container test-container: 
STEP: delete the pod
Jul 20 21:38:51.599: INFO: Waiting for pod pod-b91ebd78-c60a-4ea1-9be4-dce35bbe8b04 to disappear
Jul 20 21:38:51.615: INFO: Pod pod-b91ebd78-c60a-4ea1-9be4-dce35bbe8b04 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:38:51.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7822" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2728,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:38:51.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 20 21:38:54.821: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:38:54.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2792" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2736,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:38:54.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Jul 20 21:38:55.249: INFO: Waiting up to 5m0s for pod "var-expansion-0f6c8ea7-7a14-4e20-910c-3a2d939aa22b" in namespace "var-expansion-6112" to be "success or failure"
Jul 20 21:38:55.263: INFO: Pod "var-expansion-0f6c8ea7-7a14-4e20-910c-3a2d939aa22b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.343667ms
Jul 20 21:38:57.266: INFO: Pod "var-expansion-0f6c8ea7-7a14-4e20-910c-3a2d939aa22b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017697559s
Jul 20 21:38:59.270: INFO: Pod "var-expansion-0f6c8ea7-7a14-4e20-910c-3a2d939aa22b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02179308s
STEP: Saw pod success
Jul 20 21:38:59.270: INFO: Pod "var-expansion-0f6c8ea7-7a14-4e20-910c-3a2d939aa22b" satisfied condition "success or failure"
Jul 20 21:38:59.273: INFO: Trying to get logs from node jerma-worker pod var-expansion-0f6c8ea7-7a14-4e20-910c-3a2d939aa22b container dapi-container: 
STEP: delete the pod
Jul 20 21:38:59.308: INFO: Waiting for pod var-expansion-0f6c8ea7-7a14-4e20-910c-3a2d939aa22b to disappear
Jul 20 21:38:59.325: INFO: Pod var-expansion-0f6c8ea7-7a14-4e20-910c-3a2d939aa22b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:38:59.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6112" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2767,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:38:59.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-84ef84f9-e7fc-4152-8427-10c15ada976b
STEP: Creating a pod to test consume secrets
Jul 20 21:38:59.428: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0faca38e-2adc-4668-836f-f46b437fb26a" in namespace "projected-9842" to be "success or failure"
Jul 20 21:38:59.443: INFO: Pod "pod-projected-secrets-0faca38e-2adc-4668-836f-f46b437fb26a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.472719ms
Jul 20 21:39:01.448: INFO: Pod "pod-projected-secrets-0faca38e-2adc-4668-836f-f46b437fb26a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019704023s
Jul 20 21:39:03.451: INFO: Pod "pod-projected-secrets-0faca38e-2adc-4668-836f-f46b437fb26a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023384311s
STEP: Saw pod success
Jul 20 21:39:03.451: INFO: Pod "pod-projected-secrets-0faca38e-2adc-4668-836f-f46b437fb26a" satisfied condition "success or failure"
Jul 20 21:39:03.454: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-0faca38e-2adc-4668-836f-f46b437fb26a container projected-secret-volume-test: 
STEP: delete the pod
Jul 20 21:39:03.468: INFO: Waiting for pod pod-projected-secrets-0faca38e-2adc-4668-836f-f46b437fb26a to disappear
Jul 20 21:39:03.495: INFO: Pod pod-projected-secrets-0faca38e-2adc-4668-836f-f46b437fb26a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:39:03.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9842" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2773,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:39:03.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-4843c491-52f1-4448-983c-417c97d98697 in namespace container-probe-364
Jul 20 21:39:07.614: INFO: Started pod busybox-4843c491-52f1-4448-983c-417c97d98697 in namespace container-probe-364
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 21:39:07.617: INFO: Initial restart count of pod busybox-4843c491-52f1-4448-983c-417c97d98697 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:43:08.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-364" for this suite.

• [SLOW TEST:244.856 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2798,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:43:08.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:43:08.467: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-4d6d3a0d-b46b-4a9d-80cf-34343d00c9ba" in namespace "security-context-test-4666" to be "success or failure"
Jul 20 21:43:08.495: INFO: Pod "alpine-nnp-false-4d6d3a0d-b46b-4a9d-80cf-34343d00c9ba": Phase="Pending", Reason="", readiness=false. Elapsed: 27.846248ms
Jul 20 21:43:10.614: INFO: Pod "alpine-nnp-false-4d6d3a0d-b46b-4a9d-80cf-34343d00c9ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147236369s
Jul 20 21:43:12.619: INFO: Pod "alpine-nnp-false-4d6d3a0d-b46b-4a9d-80cf-34343d00c9ba": Phase="Running", Reason="", readiness=true. Elapsed: 4.151585074s
Jul 20 21:43:14.623: INFO: Pod "alpine-nnp-false-4d6d3a0d-b46b-4a9d-80cf-34343d00c9ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.155831912s
Jul 20 21:43:14.623: INFO: Pod "alpine-nnp-false-4d6d3a0d-b46b-4a9d-80cf-34343d00c9ba" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:43:14.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4666" for this suite.

• [SLOW TEST:6.292 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2810,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:43:14.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Jul 20 21:43:14.731: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 20 21:43:14.752: INFO: Waiting for terminating namespaces to be deleted...
Jul 20 21:43:14.755: INFO: 
Logging pods the kubelet thinks is on node jerma-worker before test
Jul 20 21:43:14.775: INFO: kube-proxy-2ssxj from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded)
Jul 20 21:43:14.775: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 20 21:43:14.775: INFO: kindnet-bqk7h from kube-system started at 2020-07-10 10:26:33 +0000 UTC (1 container statuses recorded)
Jul 20 21:43:14.775: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 20 21:43:14.775: INFO: 
Logging pods the kubelet thinks is on node jerma-worker2 before test
Jul 20 21:43:14.781: INFO: kindnet-klj8h from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded)
Jul 20 21:43:14.781: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 20 21:43:14.781: INFO: alpine-nnp-false-4d6d3a0d-b46b-4a9d-80cf-34343d00c9ba from security-context-test-4666 started at 2020-07-20 21:43:08 +0000 UTC (1 container statuses recorded)
Jul 20 21:43:14.781: INFO: 	Container alpine-nnp-false-4d6d3a0d-b46b-4a9d-80cf-34343d00c9ba ready: false, restart count 0
Jul 20 21:43:14.781: INFO: kube-proxy-67jwf from kube-system started at 2020-07-10 10:26:32 +0000 UTC (1 container statuses recorded)
Jul 20 21:43:14.781: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-97afd4f1-d0e0-49f9-b0e5-99c93315b49b 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-97afd4f1-d0e0-49f9-b0e5-99c93315b49b off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-97afd4f1-d0e0-49f9-b0e5-99c93315b49b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:43:31.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8396" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.557 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":177,"skipped":2818,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:43:31.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 20 21:43:31.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9455'
Jul 20 21:43:34.355: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 20 21:43:34.355: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jul 20 21:43:34.378: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-wkrtr]
Jul 20 21:43:34.378: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-wkrtr" in namespace "kubectl-9455" to be "running and ready"
Jul 20 21:43:34.427: INFO: Pod "e2e-test-httpd-rc-wkrtr": Phase="Pending", Reason="", readiness=false. Elapsed: 49.161737ms
Jul 20 21:43:36.501: INFO: Pod "e2e-test-httpd-rc-wkrtr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123507657s
Jul 20 21:43:38.505: INFO: Pod "e2e-test-httpd-rc-wkrtr": Phase="Running", Reason="", readiness=true. Elapsed: 4.126784531s
Jul 20 21:43:38.505: INFO: Pod "e2e-test-httpd-rc-wkrtr" satisfied condition "running and ready"
Jul 20 21:43:38.505: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-wkrtr]
Jul 20 21:43:38.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-9455'
Jul 20 21:43:38.627: INFO: stderr: ""
Jul 20 21:43:38.627: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.107. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.107. Set the 'ServerName' directive globally to suppress this message\n[Mon Jul 20 21:43:37.182009 2020] [mpm_event:notice] [pid 1:tid 139747457239912] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Jul 20 21:43:37.182081 2020] [core:notice] [pid 1:tid 139747457239912] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
Jul 20 21:43:38.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9455'
Jul 20 21:43:38.782: INFO: stderr: ""
Jul 20 21:43:38.782: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:43:38.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9455" for this suite.

• [SLOW TEST:7.578 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":178,"skipped":2837,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:43:38.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul 20 21:43:39.146: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-a 6238c315-4e9d-4311-a51d-3f989700ef88 2877605 0 2020-07-20 21:43:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 20 21:43:39.146: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-a 6238c315-4e9d-4311-a51d-3f989700ef88 2877605 0 2020-07-20 21:43:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul 20 21:43:49.184: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-a 6238c315-4e9d-4311-a51d-3f989700ef88 2877658 0 2020-07-20 21:43:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 20 21:43:49.184: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-a 6238c315-4e9d-4311-a51d-3f989700ef88 2877658 0 2020-07-20 21:43:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul 20 21:43:59.191: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-a 6238c315-4e9d-4311-a51d-3f989700ef88 2877691 0 2020-07-20 21:43:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 20 21:43:59.191: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-a 6238c315-4e9d-4311-a51d-3f989700ef88 2877691 0 2020-07-20 21:43:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul 20 21:44:09.197: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-a 6238c315-4e9d-4311-a51d-3f989700ef88 2877721 0 2020-07-20 21:43:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 20 21:44:09.197: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-a 6238c315-4e9d-4311-a51d-3f989700ef88 2877721 0 2020-07-20 21:43:39 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul 20 21:44:19.222: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-b 383fbf12-0992-484b-9fac-929d699cfff9 2877751 0 2020-07-20 21:44:19 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 20 21:44:19.222: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-b 383fbf12-0992-484b-9fac-929d699cfff9 2877751 0 2020-07-20 21:44:19 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul 20 21:44:29.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-b 383fbf12-0992-484b-9fac-929d699cfff9 2877779 0 2020-07-20 21:44:19 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 20 21:44:29.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7947 /api/v1/namespaces/watch-7947/configmaps/e2e-watch-test-configmap-b 383fbf12-0992-484b-9fac-929d699cfff9 2877779 0 2020-07-20 21:44:19 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:44:39.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7947" for this suite.

• [SLOW TEST:60.490 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":179,"skipped":2850,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:44:39.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2296
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 20 21:44:39.339: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 20 21:45:01.545: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.153:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2296 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:45:01.545: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:45:01.574407       6 log.go:172] (0xc001dec370) (0xc001c84be0) Create stream
I0720 21:45:01.574487       6 log.go:172] (0xc001dec370) (0xc001c84be0) Stream added, broadcasting: 1
I0720 21:45:01.576271       6 log.go:172] (0xc001dec370) Reply frame received for 1
I0720 21:45:01.576305       6 log.go:172] (0xc001dec370) (0xc002328500) Create stream
I0720 21:45:01.576315       6 log.go:172] (0xc001dec370) (0xc002328500) Stream added, broadcasting: 3
I0720 21:45:01.577495       6 log.go:172] (0xc001dec370) Reply frame received for 3
I0720 21:45:01.577557       6 log.go:172] (0xc001dec370) (0xc0023285a0) Create stream
I0720 21:45:01.577573       6 log.go:172] (0xc001dec370) (0xc0023285a0) Stream added, broadcasting: 5
I0720 21:45:01.578417       6 log.go:172] (0xc001dec370) Reply frame received for 5
I0720 21:45:01.635399       6 log.go:172] (0xc001dec370) Data frame received for 3
I0720 21:45:01.635435       6 log.go:172] (0xc002328500) (3) Data frame handling
I0720 21:45:01.635455       6 log.go:172] (0xc002328500) (3) Data frame sent
I0720 21:45:01.635475       6 log.go:172] (0xc001dec370) Data frame received for 3
I0720 21:45:01.635489       6 log.go:172] (0xc002328500) (3) Data frame handling
I0720 21:45:01.635624       6 log.go:172] (0xc001dec370) Data frame received for 5
I0720 21:45:01.635659       6 log.go:172] (0xc0023285a0) (5) Data frame handling
I0720 21:45:01.638044       6 log.go:172] (0xc001dec370) Data frame received for 1
I0720 21:45:01.638070       6 log.go:172] (0xc001c84be0) (1) Data frame handling
I0720 21:45:01.638095       6 log.go:172] (0xc001c84be0) (1) Data frame sent
I0720 21:45:01.638121       6 log.go:172] (0xc001dec370) (0xc001c84be0) Stream removed, broadcasting: 1
I0720 21:45:01.638146       6 log.go:172] (0xc001dec370) Go away received
I0720 21:45:01.638301       6 log.go:172] (0xc001dec370) (0xc001c84be0) Stream removed, broadcasting: 1
I0720 21:45:01.638330       6 log.go:172] (0xc001dec370) (0xc002328500) Stream removed, broadcasting: 3
I0720 21:45:01.638342       6 log.go:172] (0xc001dec370) (0xc0023285a0) Stream removed, broadcasting: 5
Jul 20 21:45:01.638: INFO: Found all expected endpoints: [netserver-0]
Jul 20 21:45:01.641: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.108:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2296 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:45:01.641: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:45:01.676608       6 log.go:172] (0xc001deca50) (0xc001c84f00) Create stream
I0720 21:45:01.676639       6 log.go:172] (0xc001deca50) (0xc001c84f00) Stream added, broadcasting: 1
I0720 21:45:01.678778       6 log.go:172] (0xc001deca50) Reply frame received for 1
I0720 21:45:01.678833       6 log.go:172] (0xc001deca50) (0xc000d1a140) Create stream
I0720 21:45:01.678854       6 log.go:172] (0xc001deca50) (0xc000d1a140) Stream added, broadcasting: 3
I0720 21:45:01.679695       6 log.go:172] (0xc001deca50) Reply frame received for 3
I0720 21:45:01.679737       6 log.go:172] (0xc001deca50) (0xc001c84fa0) Create stream
I0720 21:45:01.679752       6 log.go:172] (0xc001deca50) (0xc001c84fa0) Stream added, broadcasting: 5
I0720 21:45:01.680678       6 log.go:172] (0xc001deca50) Reply frame received for 5
I0720 21:45:01.740580       6 log.go:172] (0xc001deca50) Data frame received for 3
I0720 21:45:01.740608       6 log.go:172] (0xc000d1a140) (3) Data frame handling
I0720 21:45:01.740623       6 log.go:172] (0xc000d1a140) (3) Data frame sent
I0720 21:45:01.740824       6 log.go:172] (0xc001deca50) Data frame received for 5
I0720 21:45:01.740838       6 log.go:172] (0xc001c84fa0) (5) Data frame handling
I0720 21:45:01.740938       6 log.go:172] (0xc001deca50) Data frame received for 3
I0720 21:45:01.740954       6 log.go:172] (0xc000d1a140) (3) Data frame handling
I0720 21:45:01.742986       6 log.go:172] (0xc001deca50) Data frame received for 1
I0720 21:45:01.742997       6 log.go:172] (0xc001c84f00) (1) Data frame handling
I0720 21:45:01.743005       6 log.go:172] (0xc001c84f00) (1) Data frame sent
I0720 21:45:01.743104       6 log.go:172] (0xc001deca50) (0xc001c84f00) Stream removed, broadcasting: 1
I0720 21:45:01.743183       6 log.go:172] (0xc001deca50) (0xc001c84f00) Stream removed, broadcasting: 1
I0720 21:45:01.743206       6 log.go:172] (0xc001deca50) (0xc000d1a140) Stream removed, broadcasting: 3
I0720 21:45:01.743360       6 log.go:172] (0xc001deca50) Go away received
I0720 21:45:01.743420       6 log.go:172] (0xc001deca50) (0xc001c84fa0) Stream removed, broadcasting: 5
Jul 20 21:45:01.743: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:45:01.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2296" for this suite.

• [SLOW TEST:22.471 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2870,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:45:01.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:45:01.831: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-7ee3f363-3623-4207-b797-4cdcbc58027e" in namespace "security-context-test-9528" to be "success or failure"
Jul 20 21:45:01.868: INFO: Pod "busybox-readonly-false-7ee3f363-3623-4207-b797-4cdcbc58027e": Phase="Pending", Reason="", readiness=false. Elapsed: 36.312854ms
Jul 20 21:45:03.902: INFO: Pod "busybox-readonly-false-7ee3f363-3623-4207-b797-4cdcbc58027e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070587477s
Jul 20 21:45:05.906: INFO: Pod "busybox-readonly-false-7ee3f363-3623-4207-b797-4cdcbc58027e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07476546s
Jul 20 21:45:05.906: INFO: Pod "busybox-readonly-false-7ee3f363-3623-4207-b797-4cdcbc58027e" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:45:05.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9528" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2880,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:45:05.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-991349f0-865e-495f-9138-77d596dcb7d8
STEP: Creating a pod to test consume secrets
Jul 20 21:45:06.041: INFO: Waiting up to 5m0s for pod "pod-secrets-572b45c3-b44c-4e02-8d87-06fa96057f68" in namespace "secrets-9084" to be "success or failure"
Jul 20 21:45:06.060: INFO: Pod "pod-secrets-572b45c3-b44c-4e02-8d87-06fa96057f68": Phase="Pending", Reason="", readiness=false. Elapsed: 18.759736ms
Jul 20 21:45:08.210: INFO: Pod "pod-secrets-572b45c3-b44c-4e02-8d87-06fa96057f68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169252808s
Jul 20 21:45:10.214: INFO: Pod "pod-secrets-572b45c3-b44c-4e02-8d87-06fa96057f68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17294522s
Jul 20 21:45:12.218: INFO: Pod "pod-secrets-572b45c3-b44c-4e02-8d87-06fa96057f68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177309642s
STEP: Saw pod success
Jul 20 21:45:12.218: INFO: Pod "pod-secrets-572b45c3-b44c-4e02-8d87-06fa96057f68" satisfied condition "success or failure"
Jul 20 21:45:12.222: INFO: Trying to get logs from node jerma-worker pod pod-secrets-572b45c3-b44c-4e02-8d87-06fa96057f68 container secret-volume-test: 
STEP: delete the pod
Jul 20 21:45:12.263: INFO: Waiting for pod pod-secrets-572b45c3-b44c-4e02-8d87-06fa96057f68 to disappear
Jul 20 21:45:12.267: INFO: Pod pod-secrets-572b45c3-b44c-4e02-8d87-06fa96057f68 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:45:12.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9084" for this suite.

• [SLOW TEST:6.357 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2909,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:45:12.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 20 21:45:12.348: INFO: Waiting up to 5m0s for pod "pod-a56ff4ed-8ecf-4f42-90b7-b6098f153551" in namespace "emptydir-2133" to be "success or failure"
Jul 20 21:45:12.364: INFO: Pod "pod-a56ff4ed-8ecf-4f42-90b7-b6098f153551": Phase="Pending", Reason="", readiness=false. Elapsed: 15.569857ms
Jul 20 21:45:14.503: INFO: Pod "pod-a56ff4ed-8ecf-4f42-90b7-b6098f153551": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15418267s
Jul 20 21:45:16.506: INFO: Pod "pod-a56ff4ed-8ecf-4f42-90b7-b6098f153551": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157814898s
STEP: Saw pod success
Jul 20 21:45:16.506: INFO: Pod "pod-a56ff4ed-8ecf-4f42-90b7-b6098f153551" satisfied condition "success or failure"
Jul 20 21:45:16.509: INFO: Trying to get logs from node jerma-worker2 pod pod-a56ff4ed-8ecf-4f42-90b7-b6098f153551 container test-container: 
STEP: delete the pod
Jul 20 21:45:16.563: INFO: Waiting for pod pod-a56ff4ed-8ecf-4f42-90b7-b6098f153551 to disappear
Jul 20 21:45:16.567: INFO: Pod pod-a56ff4ed-8ecf-4f42-90b7-b6098f153551 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:45:16.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2133" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2937,"failed":0}

------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:45:16.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul 20 21:45:16.730: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1805 /api/v1/namespaces/watch-1805/configmaps/e2e-watch-test-label-changed d96eafd3-c78d-412e-b4e0-aff7b976c302 2878051 0 2020-07-20 21:45:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 20 21:45:16.730: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1805 /api/v1/namespaces/watch-1805/configmaps/e2e-watch-test-label-changed d96eafd3-c78d-412e-b4e0-aff7b976c302 2878052 0 2020-07-20 21:45:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 20 21:45:16.730: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1805 /api/v1/namespaces/watch-1805/configmaps/e2e-watch-test-label-changed d96eafd3-c78d-412e-b4e0-aff7b976c302 2878053 0 2020-07-20 21:45:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul 20 21:45:26.767: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1805 /api/v1/namespaces/watch-1805/configmaps/e2e-watch-test-label-changed d96eafd3-c78d-412e-b4e0-aff7b976c302 2878101 0 2020-07-20 21:45:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 20 21:45:26.767: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1805 /api/v1/namespaces/watch-1805/configmaps/e2e-watch-test-label-changed d96eafd3-c78d-412e-b4e0-aff7b976c302 2878102 0 2020-07-20 21:45:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jul 20 21:45:26.767: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1805 /api/v1/namespaces/watch-1805/configmaps/e2e-watch-test-label-changed d96eafd3-c78d-412e-b4e0-aff7b976c302 2878103 0 2020-07-20 21:45:16 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:45:26.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1805" for this suite.

• [SLOW TEST:10.193 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":184,"skipped":2937,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:45:26.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Jul 20 21:45:30.896: INFO: Pod pod-hostip-8864fc69-e377-495f-9022-af73216bc2c9 has hostIP: 172.18.0.10
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:45:30.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6101" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2953,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:45:30.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 21:45:31.608: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 21:45:33.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878331, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878331, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878331, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878331, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 21:45:36.668: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:45:36.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2493" for this suite.
STEP: Destroying namespace "webhook-2493-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.289 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":186,"skipped":2953,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:45:37.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:45:37.287: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1d579eb8-f452-4af0-a1a6-7f52af3a215c" in namespace "security-context-test-1712" to be "success or failure"
Jul 20 21:45:37.291: INFO: Pod "busybox-privileged-false-1d579eb8-f452-4af0-a1a6-7f52af3a215c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30084ms
Jul 20 21:45:39.327: INFO: Pod "busybox-privileged-false-1d579eb8-f452-4af0-a1a6-7f52af3a215c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040330945s
Jul 20 21:45:41.353: INFO: Pod "busybox-privileged-false-1d579eb8-f452-4af0-a1a6-7f52af3a215c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065871087s
Jul 20 21:45:41.353: INFO: Pod "busybox-privileged-false-1d579eb8-f452-4af0-a1a6-7f52af3a215c" satisfied condition "success or failure"
Jul 20 21:45:41.360: INFO: Got logs for pod "busybox-privileged-false-1d579eb8-f452-4af0-a1a6-7f52af3a215c": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:45:41.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1712" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":2962,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:45:41.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:45:41.637: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cc5594c5-676c-4142-9455-c3ef159fb9f5", Controller:(*bool)(0xc0052413a2), BlockOwnerDeletion:(*bool)(0xc0052413a3)}}
Jul 20 21:45:41.652: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5f64191e-6923-4530-a55f-33e73bd03dcd", Controller:(*bool)(0xc005241562), BlockOwnerDeletion:(*bool)(0xc005241563)}}
Jul 20 21:45:41.689: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4eaa215e-acc6-4986-a7c2-b1462221d7c8", Controller:(*bool)(0xc00524170a), BlockOwnerDeletion:(*bool)(0xc00524170b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:45:46.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3295" for this suite.

• [SLOW TEST:5.334 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":188,"skipped":2980,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:45:46.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jul 20 21:45:46.842: INFO: Pod name pod-release: Found 0 pods out of 1
Jul 20 21:45:51.853: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:45:51.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-34" for this suite.

• [SLOW TEST:5.282 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":189,"skipped":2991,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:45:52.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-f0a3b56a-a178-450c-b64e-0dae786d927c
STEP: Creating configMap with name cm-test-opt-upd-a3f84fe5-bc4b-4a74-a632-41858322432e
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f0a3b56a-a178-450c-b64e-0dae786d927c
STEP: Updating configmap cm-test-opt-upd-a3f84fe5-bc4b-4a74-a632-41858322432e
STEP: Creating configMap with name cm-test-opt-create-48b1781f-2bdf-4afc-91a3-d39594b61d9d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:46:02.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1314" for this suite.

• [SLOW TEST:10.527 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":2996,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:46:02.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-dfb2953b-0f57-4ff4-868b-ad48afb0e660
STEP: Creating a pod to test consume configMaps
Jul 20 21:46:02.683: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d5a60913-a61b-464a-a2ac-8c18a539f899" in namespace "projected-4880" to be "success or failure"
Jul 20 21:46:02.706: INFO: Pod "pod-projected-configmaps-d5a60913-a61b-464a-a2ac-8c18a539f899": Phase="Pending", Reason="", readiness=false. Elapsed: 23.111234ms
Jul 20 21:46:04.710: INFO: Pod "pod-projected-configmaps-d5a60913-a61b-464a-a2ac-8c18a539f899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027424987s
Jul 20 21:46:06.714: INFO: Pod "pod-projected-configmaps-d5a60913-a61b-464a-a2ac-8c18a539f899": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031097123s
STEP: Saw pod success
Jul 20 21:46:06.714: INFO: Pod "pod-projected-configmaps-d5a60913-a61b-464a-a2ac-8c18a539f899" satisfied condition "success or failure"
Jul 20 21:46:06.717: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-d5a60913-a61b-464a-a2ac-8c18a539f899 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 21:46:06.758: INFO: Waiting for pod pod-projected-configmaps-d5a60913-a61b-464a-a2ac-8c18a539f899 to disappear
Jul 20 21:46:06.766: INFO: Pod pod-projected-configmaps-d5a60913-a61b-464a-a2ac-8c18a539f899 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:46:06.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4880" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3000,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:46:06.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-68d9d9cb-989c-442a-863c-0c71e8698025
STEP: Creating a pod to test consume secrets
Jul 20 21:46:06.876: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b3c99d46-b659-4e3b-b421-1c7b0e8adf01" in namespace "projected-8282" to be "success or failure"
Jul 20 21:46:06.879: INFO: Pod "pod-projected-secrets-b3c99d46-b659-4e3b-b421-1c7b0e8adf01": Phase="Pending", Reason="", readiness=false. Elapsed: 3.725633ms
Jul 20 21:46:08.911: INFO: Pod "pod-projected-secrets-b3c99d46-b659-4e3b-b421-1c7b0e8adf01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034936212s
Jul 20 21:46:11.006: INFO: Pod "pod-projected-secrets-b3c99d46-b659-4e3b-b421-1c7b0e8adf01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130762413s
STEP: Saw pod success
Jul 20 21:46:11.006: INFO: Pod "pod-projected-secrets-b3c99d46-b659-4e3b-b421-1c7b0e8adf01" satisfied condition "success or failure"
Jul 20 21:46:11.009: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b3c99d46-b659-4e3b-b421-1c7b0e8adf01 container projected-secret-volume-test: 
STEP: delete the pod
Jul 20 21:46:11.270: INFO: Waiting for pod pod-projected-secrets-b3c99d46-b659-4e3b-b421-1c7b0e8adf01 to disappear
Jul 20 21:46:11.274: INFO: Pod pod-projected-secrets-b3c99d46-b659-4e3b-b421-1c7b0e8adf01 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:46:11.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8282" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3029,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:46:11.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2937
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-2937
Jul 20 21:46:11.418: INFO: Found 0 stateful pods, waiting for 1
Jul 20 21:46:21.422: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 20 21:46:21.449: INFO: Deleting all statefulset in ns statefulset-2937
Jul 20 21:46:21.570: INFO: Scaling statefulset ss to 0
Jul 20 21:46:41.662: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 21:46:41.665: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:46:41.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2937" for this suite.

• [SLOW TEST:30.443 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":193,"skipped":3030,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:46:41.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:46:46.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3668" for this suite.

• [SLOW TEST:5.249 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":194,"skipped":3043,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:46:46.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:46:51.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5175" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3090,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:46:51.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-2cfe624e-e7ad-465c-ab3f-fec400d6200b in namespace container-probe-5043
Jul 20 21:46:55.423: INFO: Started pod liveness-2cfe624e-e7ad-465c-ab3f-fec400d6200b in namespace container-probe-5043
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 21:46:55.425: INFO: Initial restart count of pod liveness-2cfe624e-e7ad-465c-ab3f-fec400d6200b is 0
Jul 20 21:47:09.468: INFO: Restart count of pod container-probe-5043/liveness-2cfe624e-e7ad-465c-ab3f-fec400d6200b is now 1 (14.042829442s elapsed)
Jul 20 21:47:29.546: INFO: Restart count of pod container-probe-5043/liveness-2cfe624e-e7ad-465c-ab3f-fec400d6200b is now 2 (34.120888081s elapsed)
Jul 20 21:47:49.638: INFO: Restart count of pod container-probe-5043/liveness-2cfe624e-e7ad-465c-ab3f-fec400d6200b is now 3 (54.212555125s elapsed)
Jul 20 21:48:09.691: INFO: Restart count of pod container-probe-5043/liveness-2cfe624e-e7ad-465c-ab3f-fec400d6200b is now 4 (1m14.266229925s elapsed)
Jul 20 21:49:23.903: INFO: Restart count of pod container-probe-5043/liveness-2cfe624e-e7ad-465c-ab3f-fec400d6200b is now 5 (2m28.477961434s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:49:23.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5043" for this suite.

• [SLOW TEST:152.697 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3107,"failed":0}
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:49:23.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3816.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3816.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 21:49:30.199: INFO: DNS probes using dns-3816/dns-test-03a82e5f-3476-46cb-83ce-6229499ff1b0 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:49:30.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3816" for this suite.

• [SLOW TEST:6.366 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":278,"completed":197,"skipped":3107,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:49:30.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jul 20 21:49:30.739: INFO: Created pod &Pod{ObjectMeta:{dns-9775  dns-9775 /api/v1/namespaces/dns-9775/pods/dns-9775 bb266e59-d5be-4461-b2c4-bf9e56da3f75 2879334 0 2020-07-20 21:49:30 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jdkwx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jdkwx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jdkwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jul 20 21:49:34.804: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9775 PodName:dns-9775 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:49:34.804: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:49:34.840834       6 log.go:172] (0xc001e62370) (0xc001d65860) Create stream
I0720 21:49:34.840865       6 log.go:172] (0xc001e62370) (0xc001d65860) Stream added, broadcasting: 1
I0720 21:49:34.842700       6 log.go:172] (0xc001e62370) Reply frame received for 1
I0720 21:49:34.842755       6 log.go:172] (0xc001e62370) (0xc001d65900) Create stream
I0720 21:49:34.842772       6 log.go:172] (0xc001e62370) (0xc001d65900) Stream added, broadcasting: 3
I0720 21:49:34.843769       6 log.go:172] (0xc001e62370) Reply frame received for 3
I0720 21:49:34.843801       6 log.go:172] (0xc001e62370) (0xc0021b40a0) Create stream
I0720 21:49:34.843815       6 log.go:172] (0xc001e62370) (0xc0021b40a0) Stream added, broadcasting: 5
I0720 21:49:34.844817       6 log.go:172] (0xc001e62370) Reply frame received for 5
I0720 21:49:34.910187       6 log.go:172] (0xc001e62370) Data frame received for 3
I0720 21:49:34.910232       6 log.go:172] (0xc001d65900) (3) Data frame handling
I0720 21:49:34.910260       6 log.go:172] (0xc001d65900) (3) Data frame sent
I0720 21:49:34.910980       6 log.go:172] (0xc001e62370) Data frame received for 3
I0720 21:49:34.911001       6 log.go:172] (0xc001d65900) (3) Data frame handling
I0720 21:49:34.911016       6 log.go:172] (0xc001e62370) Data frame received for 5
I0720 21:49:34.911023       6 log.go:172] (0xc0021b40a0) (5) Data frame handling
I0720 21:49:34.912219       6 log.go:172] (0xc001e62370) Data frame received for 1
I0720 21:49:34.912240       6 log.go:172] (0xc001d65860) (1) Data frame handling
I0720 21:49:34.912253       6 log.go:172] (0xc001d65860) (1) Data frame sent
I0720 21:49:34.912276       6 log.go:172] (0xc001e62370) (0xc001d65860) Stream removed, broadcasting: 1
I0720 21:49:34.912289       6 log.go:172] (0xc001e62370) Go away received
I0720 21:49:34.912420       6 log.go:172] (0xc001e62370) (0xc001d65860) Stream removed, broadcasting: 1
I0720 21:49:34.912436       6 log.go:172] (0xc001e62370) (0xc001d65900) Stream removed, broadcasting: 3
I0720 21:49:34.912446       6 log.go:172] (0xc001e62370) (0xc0021b40a0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jul 20 21:49:34.912: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9775 PodName:dns-9775 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:49:34.912: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:49:34.936359       6 log.go:172] (0xc001bbdef0) (0xc002227220) Create stream
I0720 21:49:34.936383       6 log.go:172] (0xc001bbdef0) (0xc002227220) Stream added, broadcasting: 1
I0720 21:49:34.938177       6 log.go:172] (0xc001bbdef0) Reply frame received for 1
I0720 21:49:34.938208       6 log.go:172] (0xc001bbdef0) (0xc0021b4140) Create stream
I0720 21:49:34.938218       6 log.go:172] (0xc001bbdef0) (0xc0021b4140) Stream added, broadcasting: 3
I0720 21:49:34.939071       6 log.go:172] (0xc001bbdef0) Reply frame received for 3
I0720 21:49:34.939109       6 log.go:172] (0xc001bbdef0) (0xc001d65ae0) Create stream
I0720 21:49:34.939125       6 log.go:172] (0xc001bbdef0) (0xc001d65ae0) Stream added, broadcasting: 5
I0720 21:49:34.940059       6 log.go:172] (0xc001bbdef0) Reply frame received for 5
I0720 21:49:35.009953       6 log.go:172] (0xc001bbdef0) Data frame received for 3
I0720 21:49:35.009981       6 log.go:172] (0xc0021b4140) (3) Data frame handling
I0720 21:49:35.010002       6 log.go:172] (0xc0021b4140) (3) Data frame sent
I0720 21:49:35.011085       6 log.go:172] (0xc001bbdef0) Data frame received for 5
I0720 21:49:35.011111       6 log.go:172] (0xc001d65ae0) (5) Data frame handling
I0720 21:49:35.011126       6 log.go:172] (0xc001bbdef0) Data frame received for 3
I0720 21:49:35.011133       6 log.go:172] (0xc0021b4140) (3) Data frame handling
I0720 21:49:35.012391       6 log.go:172] (0xc001bbdef0) Data frame received for 1
I0720 21:49:35.012407       6 log.go:172] (0xc002227220) (1) Data frame handling
I0720 21:49:35.012418       6 log.go:172] (0xc002227220) (1) Data frame sent
I0720 21:49:35.012541       6 log.go:172] (0xc001bbdef0) (0xc002227220) Stream removed, broadcasting: 1
I0720 21:49:35.012612       6 log.go:172] (0xc001bbdef0) (0xc002227220) Stream removed, broadcasting: 1
I0720 21:49:35.012630       6 log.go:172] (0xc001bbdef0) (0xc0021b4140) Stream removed, broadcasting: 3
I0720 21:49:35.012646       6 log.go:172] (0xc001bbdef0) Go away received
I0720 21:49:35.012690       6 log.go:172] (0xc001bbdef0) (0xc001d65ae0) Stream removed, broadcasting: 5
Jul 20 21:49:35.012: INFO: Deleting pod dns-9775...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:49:35.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9775" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":198,"skipped":3125,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:49:35.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:49:39.177: INFO: Waiting up to 5m0s for pod "client-envvars-88ade1ff-3ddd-47a3-af98-bf0832574d5f" in namespace "pods-589" to be "success or failure"
Jul 20 21:49:39.196: INFO: Pod "client-envvars-88ade1ff-3ddd-47a3-af98-bf0832574d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.794539ms
Jul 20 21:49:41.273: INFO: Pod "client-envvars-88ade1ff-3ddd-47a3-af98-bf0832574d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095243908s
Jul 20 21:49:43.276: INFO: Pod "client-envvars-88ade1ff-3ddd-47a3-af98-bf0832574d5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098764008s
STEP: Saw pod success
Jul 20 21:49:43.276: INFO: Pod "client-envvars-88ade1ff-3ddd-47a3-af98-bf0832574d5f" satisfied condition "success or failure"
Jul 20 21:49:43.279: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-88ade1ff-3ddd-47a3-af98-bf0832574d5f container env3cont: 
STEP: delete the pod
Jul 20 21:49:43.332: INFO: Waiting for pod client-envvars-88ade1ff-3ddd-47a3-af98-bf0832574d5f to disappear
Jul 20 21:49:43.344: INFO: Pod client-envvars-88ade1ff-3ddd-47a3-af98-bf0832574d5f no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:49:43.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-589" for this suite.

• [SLOW TEST:8.333 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3160,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:49:43.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-3e788853-73b3-492a-b90f-6d0cceec0269
STEP: Creating a pod to test consume secrets
Jul 20 21:49:43.442: INFO: Waiting up to 5m0s for pod "pod-secrets-2b4445c0-dd5f-4c48-aef2-6e1237e700d1" in namespace "secrets-4276" to be "success or failure"
Jul 20 21:49:43.446: INFO: Pod "pod-secrets-2b4445c0-dd5f-4c48-aef2-6e1237e700d1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.726296ms
Jul 20 21:49:45.519: INFO: Pod "pod-secrets-2b4445c0-dd5f-4c48-aef2-6e1237e700d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076874077s
Jul 20 21:49:47.524: INFO: Pod "pod-secrets-2b4445c0-dd5f-4c48-aef2-6e1237e700d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082063838s
STEP: Saw pod success
Jul 20 21:49:47.524: INFO: Pod "pod-secrets-2b4445c0-dd5f-4c48-aef2-6e1237e700d1" satisfied condition "success or failure"
Jul 20 21:49:47.527: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-2b4445c0-dd5f-4c48-aef2-6e1237e700d1 container secret-volume-test: 
STEP: delete the pod
Jul 20 21:49:47.554: INFO: Waiting for pod pod-secrets-2b4445c0-dd5f-4c48-aef2-6e1237e700d1 to disappear
Jul 20 21:49:47.559: INFO: Pod pod-secrets-2b4445c0-dd5f-4c48-aef2-6e1237e700d1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:49:47.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4276" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3165,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:49:47.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 21:49:48.052: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 21:49:50.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878588, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878588, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878588, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878588, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:49:52.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878588, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878588, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878588, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878588, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 21:49:55.419: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jul 20 21:49:59.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-7463 to-be-attached-pod -i -c=container1'
Jul 20 21:49:59.582: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:49:59.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7463" for this suite.
STEP: Destroying namespace "webhook-7463-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.158 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":201,"skipped":3184,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:49:59.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 21:50:01.009: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 21:50:03.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878601, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878601, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878601, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878600, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 21:50:06.147: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:50:06.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8360" for this suite.
STEP: Destroying namespace "webhook-8360-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.020 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":202,"skipped":3185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:50:06.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 20 21:50:06.862: INFO: Waiting up to 5m0s for pod "pod-8b972330-229d-4a13-893c-766cda5fea70" in namespace "emptydir-2882" to be "success or failure"
Jul 20 21:50:06.891: INFO: Pod "pod-8b972330-229d-4a13-893c-766cda5fea70": Phase="Pending", Reason="", readiness=false. Elapsed: 29.071781ms
Jul 20 21:50:08.895: INFO: Pod "pod-8b972330-229d-4a13-893c-766cda5fea70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033122811s
Jul 20 21:50:10.899: INFO: Pod "pod-8b972330-229d-4a13-893c-766cda5fea70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037163442s
STEP: Saw pod success
Jul 20 21:50:10.900: INFO: Pod "pod-8b972330-229d-4a13-893c-766cda5fea70" satisfied condition "success or failure"
Jul 20 21:50:10.903: INFO: Trying to get logs from node jerma-worker pod pod-8b972330-229d-4a13-893c-766cda5fea70 container test-container: 
STEP: delete the pod
Jul 20 21:50:10.994: INFO: Waiting for pod pod-8b972330-229d-4a13-893c-766cda5fea70 to disappear
Jul 20 21:50:11.017: INFO: Pod pod-8b972330-229d-4a13-893c-766cda5fea70 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:50:11.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2882" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3218,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:50:11.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 21:50:11.580: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 21:50:13.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878611, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878611, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878611, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878611, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:50:15.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878611, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878611, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878611, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878611, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 21:50:18.631: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:50:18.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-720" for this suite.
STEP: Destroying namespace "webhook-720-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.902 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":204,"skipped":3236,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:50:18.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0720 21:50:20.565908       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 21:50:20.565: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:50:20.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8259" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":205,"skipped":3242,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:50:20.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5568
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5568
STEP: creating replication controller externalsvc in namespace services-5568
I0720 21:50:21.151108       6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5568, replica count: 2
I0720 21:50:24.201555       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:50:27.201776       6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jul 20 21:50:27.270: INFO: Creating new exec pod
Jul 20 21:50:31.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5568 execpodflw2b -- /bin/sh -x -c nslookup nodeport-service'
Jul 20 21:50:31.542: INFO: stderr: "I0720 21:50:31.436809    3448 log.go:172] (0xc000b77290) (0xc000b6e460) Create stream\nI0720 21:50:31.436896    3448 log.go:172] (0xc000b77290) (0xc000b6e460) Stream added, broadcasting: 1\nI0720 21:50:31.438611    3448 log.go:172] (0xc000b77290) Reply frame received for 1\nI0720 21:50:31.438671    3448 log.go:172] (0xc000b77290) (0xc000b6e500) Create stream\nI0720 21:50:31.438689    3448 log.go:172] (0xc000b77290) (0xc000b6e500) Stream added, broadcasting: 3\nI0720 21:50:31.439628    3448 log.go:172] (0xc000b77290) Reply frame received for 3\nI0720 21:50:31.439678    3448 log.go:172] (0xc000b77290) (0xc000bbc140) Create stream\nI0720 21:50:31.439705    3448 log.go:172] (0xc000b77290) (0xc000bbc140) Stream added, broadcasting: 5\nI0720 21:50:31.440518    3448 log.go:172] (0xc000b77290) Reply frame received for 5\nI0720 21:50:31.529369    3448 log.go:172] (0xc000b77290) Data frame received for 5\nI0720 21:50:31.529394    3448 log.go:172] (0xc000bbc140) (5) Data frame handling\nI0720 21:50:31.529414    3448 log.go:172] (0xc000bbc140) (5) Data frame sent\n+ nslookup nodeport-service\nI0720 21:50:31.535666    3448 log.go:172] (0xc000b77290) Data frame received for 3\nI0720 21:50:31.535700    3448 log.go:172] (0xc000b6e500) (3) Data frame handling\nI0720 21:50:31.535716    3448 log.go:172] (0xc000b6e500) (3) Data frame sent\nI0720 21:50:31.536604    3448 log.go:172] (0xc000b77290) Data frame received for 3\nI0720 21:50:31.536617    3448 log.go:172] (0xc000b6e500) (3) Data frame handling\nI0720 21:50:31.536630    3448 log.go:172] (0xc000b6e500) (3) Data frame sent\nI0720 21:50:31.537274    3448 log.go:172] (0xc000b77290) Data frame received for 5\nI0720 21:50:31.537295    3448 log.go:172] (0xc000bbc140) (5) Data frame handling\nI0720 21:50:31.537317    3448 log.go:172] (0xc000b77290) Data frame received for 3\nI0720 21:50:31.537330    3448 log.go:172] (0xc000b6e500) (3) Data frame handling\nI0720 21:50:31.538677    3448 log.go:172] (0xc000b77290) Data frame received for 1\nI0720 21:50:31.538690    3448 log.go:172] (0xc000b6e460) (1) Data frame handling\nI0720 21:50:31.538697    3448 log.go:172] (0xc000b6e460) (1) Data frame sent\nI0720 21:50:31.538707    3448 log.go:172] (0xc000b77290) (0xc000b6e460) Stream removed, broadcasting: 1\nI0720 21:50:31.538725    3448 log.go:172] (0xc000b77290) Go away received\nI0720 21:50:31.539011    3448 log.go:172] (0xc000b77290) (0xc000b6e460) Stream removed, broadcasting: 1\nI0720 21:50:31.539027    3448 log.go:172] (0xc000b77290) (0xc000b6e500) Stream removed, broadcasting: 3\nI0720 21:50:31.539035    3448 log.go:172] (0xc000b77290) (0xc000bbc140) Stream removed, broadcasting: 5\n"
Jul 20 21:50:31.542: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5568.svc.cluster.local\tcanonical name = externalsvc.services-5568.svc.cluster.local.\nName:\texternalsvc.services-5568.svc.cluster.local\nAddress: 10.97.112.55\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5568, will wait for the garbage collector to delete the pods
Jul 20 21:50:31.623: INFO: Deleting ReplicationController externalsvc took: 27.952836ms
Jul 20 21:50:31.723: INFO: Terminating ReplicationController externalsvc pods took: 100.247437ms
Jul 20 21:50:47.456: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:50:47.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5568" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:26.916 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":206,"skipped":3245,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:50:47.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-970e7467-f7f9-4fe7-8590-41a478119c9a
STEP: Creating secret with name secret-projected-all-test-volume-c4b7c5d5-75b1-43d7-9f7f-a9f30ce3fc2e
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 20 21:50:47.587: INFO: Waiting up to 5m0s for pod "projected-volume-2028cce0-68d1-4273-b019-8ca02069060a" in namespace "projected-3212" to be "success or failure"
Jul 20 21:50:47.594: INFO: Pod "projected-volume-2028cce0-68d1-4273-b019-8ca02069060a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.513928ms
Jul 20 21:50:49.707: INFO: Pod "projected-volume-2028cce0-68d1-4273-b019-8ca02069060a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119562172s
Jul 20 21:50:51.710: INFO: Pod "projected-volume-2028cce0-68d1-4273-b019-8ca02069060a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122927583s
STEP: Saw pod success
Jul 20 21:50:51.710: INFO: Pod "projected-volume-2028cce0-68d1-4273-b019-8ca02069060a" satisfied condition "success or failure"
Jul 20 21:50:51.712: INFO: Trying to get logs from node jerma-worker pod projected-volume-2028cce0-68d1-4273-b019-8ca02069060a container projected-all-volume-test: 
STEP: delete the pod
Jul 20 21:50:51.732: INFO: Waiting for pod projected-volume-2028cce0-68d1-4273-b019-8ca02069060a to disappear
Jul 20 21:50:51.754: INFO: Pod projected-volume-2028cce0-68d1-4273-b019-8ca02069060a no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:50:51.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3212" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3319,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:50:51.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-ad08699e-3958-4315-973d-cceed7281b85
STEP: Creating a pod to test consume configMaps
Jul 20 21:50:51.918: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec7d7d6c-2e80-43e0-8841-e7a14772d3e9" in namespace "configmap-1554" to be "success or failure"
Jul 20 21:50:51.921: INFO: Pod "pod-configmaps-ec7d7d6c-2e80-43e0-8841-e7a14772d3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.454488ms
Jul 20 21:50:54.414: INFO: Pod "pod-configmaps-ec7d7d6c-2e80-43e0-8841-e7a14772d3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.496000193s
Jul 20 21:50:56.418: INFO: Pod "pod-configmaps-ec7d7d6c-2e80-43e0-8841-e7a14772d3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.499745206s
Jul 20 21:50:58.422: INFO: Pod "pod-configmaps-ec7d7d6c-2e80-43e0-8841-e7a14772d3e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.503888498s
STEP: Saw pod success
Jul 20 21:50:58.422: INFO: Pod "pod-configmaps-ec7d7d6c-2e80-43e0-8841-e7a14772d3e9" satisfied condition "success or failure"
Jul 20 21:50:58.425: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ec7d7d6c-2e80-43e0-8841-e7a14772d3e9 container configmap-volume-test: 
STEP: delete the pod
Jul 20 21:50:58.450: INFO: Waiting for pod pod-configmaps-ec7d7d6c-2e80-43e0-8841-e7a14772d3e9 to disappear
Jul 20 21:50:58.479: INFO: Pod pod-configmaps-ec7d7d6c-2e80-43e0-8841-e7a14772d3e9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:50:58.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1554" for this suite.

• [SLOW TEST:6.772 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3355,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:50:58.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 21:50:59.304: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 21:51:01.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878659, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878659, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878659, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878659, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 21:51:03.319: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878659, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878659, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878659, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730878659, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 21:51:06.342: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:51:06.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9752" for this suite.
STEP: Destroying namespace "webhook-9752-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.497 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":209,"skipped":3371,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:51:07.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 20 21:51:07.122: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd2b9316-29a8-4e1c-a763-c5b1bf786330" in namespace "downward-api-6480" to be "success or failure"
Jul 20 21:51:07.153: INFO: Pod "downwardapi-volume-bd2b9316-29a8-4e1c-a763-c5b1bf786330": Phase="Pending", Reason="", readiness=false. Elapsed: 30.448588ms
Jul 20 21:51:09.157: INFO: Pod "downwardapi-volume-bd2b9316-29a8-4e1c-a763-c5b1bf786330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034748934s
Jul 20 21:51:11.173: INFO: Pod "downwardapi-volume-bd2b9316-29a8-4e1c-a763-c5b1bf786330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050564688s
STEP: Saw pod success
Jul 20 21:51:11.173: INFO: Pod "downwardapi-volume-bd2b9316-29a8-4e1c-a763-c5b1bf786330" satisfied condition "success or failure"
Jul 20 21:51:11.175: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-bd2b9316-29a8-4e1c-a763-c5b1bf786330 container client-container: 
STEP: delete the pod
Jul 20 21:51:11.209: INFO: Waiting for pod downwardapi-volume-bd2b9316-29a8-4e1c-a763-c5b1bf786330 to disappear
Jul 20 21:51:11.228: INFO: Pod downwardapi-volume-bd2b9316-29a8-4e1c-a763-c5b1bf786330 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:51:11.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6480" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3388,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:51:11.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7558.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7558.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7558.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 20 21:51:17.377: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:17.383: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:17.387: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:17.389: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:17.397: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:17.398: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:17.400: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:17.402: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:17.407: INFO: Lookups using dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local]

Jul 20 21:51:22.412: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:22.416: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:22.419: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:22.422: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:22.430: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:22.433: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:22.435: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:22.438: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:22.445: INFO: Lookups using dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local]

Jul 20 21:51:27.411: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:27.445: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:27.465: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:27.468: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:27.475: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:27.477: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:27.479: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:27.481: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:27.485: INFO: Lookups using dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local]

Jul 20 21:51:32.412: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:32.416: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:32.419: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:32.423: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:32.431: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:32.434: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:32.437: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:32.440: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:32.446: INFO: Lookups using dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local]

Jul 20 21:51:37.412: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:37.414: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:37.417: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:37.420: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:37.429: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:37.431: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:37.434: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:37.437: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:37.444: INFO: Lookups using dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local]

Jul 20 21:51:42.413: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:42.416: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:42.419: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:42.422: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:42.432: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:42.435: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:42.438: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:42.442: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local from pod dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08: the server could not find the requested resource (get pods dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08)
Jul 20 21:51:42.448: INFO: Lookups using dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7558.svc.cluster.local jessie_udp@dns-test-service-2.dns-7558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7558.svc.cluster.local]

Jul 20 21:51:47.439: INFO: DNS probes using dns-7558/dns-test-f4f0d943-646e-4369-94fc-597b3a7c6e08 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:51:47.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7558" for this suite.

• [SLOW TEST:36.850 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":211,"skipped":3393,"failed":0}
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:51:48.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-4743/configmap-test-621d6878-ad0f-4ae0-8216-1e2de81fa92f
STEP: Creating a pod to test consume configMaps
Jul 20 21:51:48.230: INFO: Waiting up to 5m0s for pod "pod-configmaps-81f22fd4-09a0-4925-aba3-a0f44b7912a0" in namespace "configmap-4743" to be "success or failure"
Jul 20 21:51:48.240: INFO: Pod "pod-configmaps-81f22fd4-09a0-4925-aba3-a0f44b7912a0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.989073ms
Jul 20 21:51:50.259: INFO: Pod "pod-configmaps-81f22fd4-09a0-4925-aba3-a0f44b7912a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028419856s
Jul 20 21:51:52.263: INFO: Pod "pod-configmaps-81f22fd4-09a0-4925-aba3-a0f44b7912a0": Phase="Running", Reason="", readiness=true. Elapsed: 4.032613748s
Jul 20 21:51:54.267: INFO: Pod "pod-configmaps-81f22fd4-09a0-4925-aba3-a0f44b7912a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036723645s
STEP: Saw pod success
Jul 20 21:51:54.267: INFO: Pod "pod-configmaps-81f22fd4-09a0-4925-aba3-a0f44b7912a0" satisfied condition "success or failure"
Jul 20 21:51:54.271: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-81f22fd4-09a0-4925-aba3-a0f44b7912a0 container env-test: 
STEP: delete the pod
Jul 20 21:51:54.305: INFO: Waiting for pod pod-configmaps-81f22fd4-09a0-4925-aba3-a0f44b7912a0 to disappear
Jul 20 21:51:54.320: INFO: Pod pod-configmaps-81f22fd4-09a0-4925-aba3-a0f44b7912a0 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:51:54.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4743" for this suite.

• [SLOW TEST:6.311 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3397,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:51:54.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:52:07.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3954" for this suite.

• [SLOW TEST:13.202 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":213,"skipped":3405,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:52:07.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-062bb607-2e89-4679-bb98-8864063a95ac
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:52:13.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9443" for this suite.

• [SLOW TEST:6.204 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3426,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:52:13.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:52:13.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4196" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":215,"skipped":3429,"failed":0}
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:52:13.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:52:14.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8002" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3430,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:52:14.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 20 21:52:14.177: INFO: Waiting up to 5m0s for pod "pod-12191ec2-c6ef-4c49-9545-5239c78d76ce" in namespace "emptydir-8984" to be "success or failure"
Jul 20 21:52:14.245: INFO: Pod "pod-12191ec2-c6ef-4c49-9545-5239c78d76ce": Phase="Pending", Reason="", readiness=false. Elapsed: 68.432075ms
Jul 20 21:52:16.249: INFO: Pod "pod-12191ec2-c6ef-4c49-9545-5239c78d76ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072075639s
Jul 20 21:52:18.253: INFO: Pod "pod-12191ec2-c6ef-4c49-9545-5239c78d76ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075957874s
STEP: Saw pod success
Jul 20 21:52:18.253: INFO: Pod "pod-12191ec2-c6ef-4c49-9545-5239c78d76ce" satisfied condition "success or failure"
Jul 20 21:52:18.255: INFO: Trying to get logs from node jerma-worker pod pod-12191ec2-c6ef-4c49-9545-5239c78d76ce container test-container: 
STEP: delete the pod
Jul 20 21:52:18.272: INFO: Waiting for pod pod-12191ec2-c6ef-4c49-9545-5239c78d76ce to disappear
Jul 20 21:52:18.292: INFO: Pod pod-12191ec2-c6ef-4c49-9545-5239c78d76ce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:52:18.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8984" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3443,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:52:18.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 20 21:52:18.424: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2590af24-b189-4c0b-8c66-ea91ff117f6d" in namespace "downward-api-1082" to be "success or failure"
Jul 20 21:52:18.439: INFO: Pod "downwardapi-volume-2590af24-b189-4c0b-8c66-ea91ff117f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.851522ms
Jul 20 21:52:20.449: INFO: Pod "downwardapi-volume-2590af24-b189-4c0b-8c66-ea91ff117f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025331313s
Jul 20 21:52:22.491: INFO: Pod "downwardapi-volume-2590af24-b189-4c0b-8c66-ea91ff117f6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067114461s
STEP: Saw pod success
Jul 20 21:52:22.491: INFO: Pod "downwardapi-volume-2590af24-b189-4c0b-8c66-ea91ff117f6d" satisfied condition "success or failure"
Jul 20 21:52:22.495: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2590af24-b189-4c0b-8c66-ea91ff117f6d container client-container: 
STEP: delete the pod
Jul 20 21:52:22.559: INFO: Waiting for pod downwardapi-volume-2590af24-b189-4c0b-8c66-ea91ff117f6d to disappear
Jul 20 21:52:22.584: INFO: Pod downwardapi-volume-2590af24-b189-4c0b-8c66-ea91ff117f6d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:52:22.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1082" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:52:22.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-8fd012eb-78ae-4e3e-883f-0d4fdf50ff94
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-8fd012eb-78ae-4e3e-883f-0d4fdf50ff94
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:52:29.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6196" for this suite.

• [SLOW TEST:6.489 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3505,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:52:29.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:53:29.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3466" for this suite.

• [SLOW TEST:60.077 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3549,"failed":0}
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:53:29.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 20 21:53:29.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3421'
Jul 20 21:53:29.427: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 20 21:53:29.427: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631
Jul 20 21:53:33.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3421'
Jul 20 21:53:37.521: INFO: stderr: ""
Jul 20 21:53:37.521: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:53:37.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3421" for this suite.

• [SLOW TEST:8.340 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":221,"skipped":3549,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:53:37.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul 20 21:53:47.728: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9637 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:53:47.728: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:53:47.758879       6 log.go:172] (0xc001e622c0) (0xc00296a3c0) Create stream
I0720 21:53:47.758912       6 log.go:172] (0xc001e622c0) (0xc00296a3c0) Stream added, broadcasting: 1
I0720 21:53:47.760995       6 log.go:172] (0xc001e622c0) Reply frame received for 1
I0720 21:53:47.761028       6 log.go:172] (0xc001e622c0) (0xc00296a640) Create stream
I0720 21:53:47.761040       6 log.go:172] (0xc001e622c0) (0xc00296a640) Stream added, broadcasting: 3
I0720 21:53:47.762061       6 log.go:172] (0xc001e622c0) Reply frame received for 3
I0720 21:53:47.762101       6 log.go:172] (0xc001e622c0) (0xc00296aaa0) Create stream
I0720 21:53:47.762118       6 log.go:172] (0xc001e622c0) (0xc00296aaa0) Stream added, broadcasting: 5
I0720 21:53:47.762992       6 log.go:172] (0xc001e622c0) Reply frame received for 5
I0720 21:53:47.839633       6 log.go:172] (0xc001e622c0) Data frame received for 3
I0720 21:53:47.839687       6 log.go:172] (0xc00296a640) (3) Data frame handling
I0720 21:53:47.839711       6 log.go:172] (0xc00296a640) (3) Data frame sent
I0720 21:53:47.839730       6 log.go:172] (0xc001e622c0) Data frame received for 3
I0720 21:53:47.839742       6 log.go:172] (0xc00296a640) (3) Data frame handling
I0720 21:53:47.839806       6 log.go:172] (0xc001e622c0) Data frame received for 5
I0720 21:53:47.839863       6 log.go:172] (0xc00296aaa0) (5) Data frame handling
I0720 21:53:47.841398       6 log.go:172] (0xc001e622c0) Data frame received for 1
I0720 21:53:47.841431       6 log.go:172] (0xc00296a3c0) (1) Data frame handling
I0720 21:53:47.841460       6 log.go:172] (0xc00296a3c0) (1) Data frame sent
I0720 21:53:47.841480       6 log.go:172] (0xc001e622c0) (0xc00296a3c0) Stream removed, broadcasting: 1
I0720 21:53:47.841501       6 log.go:172] (0xc001e622c0) Go away received
I0720 21:53:47.841597       6 log.go:172] (0xc001e622c0) (0xc00296a3c0) Stream removed, broadcasting: 1
I0720 21:53:47.841624       6 log.go:172] (0xc001e622c0) (0xc00296a640) Stream removed, broadcasting: 3
I0720 21:53:47.841632       6 log.go:172] (0xc001e622c0) (0xc00296aaa0) Stream removed, broadcasting: 5
Jul 20 21:53:47.841: INFO: Exec stderr: ""
Jul 20 21:53:47.841: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9637 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:53:47.841: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:53:47.871871       6 log.go:172] (0xc002410580) (0xc002717180) Create stream
I0720 21:53:47.871900       6 log.go:172] (0xc002410580) (0xc002717180) Stream added, broadcasting: 1
I0720 21:53:47.874079       6 log.go:172] (0xc002410580) Reply frame received for 1
I0720 21:53:47.874125       6 log.go:172] (0xc002410580) (0xc00291cc80) Create stream
I0720 21:53:47.874146       6 log.go:172] (0xc002410580) (0xc00291cc80) Stream added, broadcasting: 3
I0720 21:53:47.874949       6 log.go:172] (0xc002410580) Reply frame received for 3
I0720 21:53:47.874979       6 log.go:172] (0xc002410580) (0xc002714500) Create stream
I0720 21:53:47.874993       6 log.go:172] (0xc002410580) (0xc002714500) Stream added, broadcasting: 5
I0720 21:53:47.876041       6 log.go:172] (0xc002410580) Reply frame received for 5
I0720 21:53:47.941343       6 log.go:172] (0xc002410580) Data frame received for 5
I0720 21:53:47.941378       6 log.go:172] (0xc002714500) (5) Data frame handling
I0720 21:53:47.941399       6 log.go:172] (0xc002410580) Data frame received for 3
I0720 21:53:47.941412       6 log.go:172] (0xc00291cc80) (3) Data frame handling
I0720 21:53:47.941453       6 log.go:172] (0xc00291cc80) (3) Data frame sent
I0720 21:53:47.941472       6 log.go:172] (0xc002410580) Data frame received for 3
I0720 21:53:47.941484       6 log.go:172] (0xc00291cc80) (3) Data frame handling
I0720 21:53:47.942775       6 log.go:172] (0xc002410580) Data frame received for 1
I0720 21:53:47.942798       6 log.go:172] (0xc002717180) (1) Data frame handling
I0720 21:53:47.942826       6 log.go:172] (0xc002717180) (1) Data frame sent
I0720 21:53:47.942848       6 log.go:172] (0xc002410580) (0xc002717180) Stream removed, broadcasting: 1
I0720 21:53:47.942941       6 log.go:172] (0xc002410580) (0xc002717180) Stream removed, broadcasting: 1
I0720 21:53:47.942956       6 log.go:172] (0xc002410580) (0xc00291cc80) Stream removed, broadcasting: 3
I0720 21:53:47.943112       6 log.go:172] (0xc002410580) (0xc002714500) Stream removed, broadcasting: 5
I0720 21:53:47.943206       6 log.go:172] (0xc002410580) Go away received
Jul 20 21:53:47.943: INFO: Exec stderr: ""
Jul 20 21:53:47.943: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9637 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:53:47.943: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:53:47.973191       6 log.go:172] (0xc001dec8f0) (0xc002714e60) Create stream
I0720 21:53:47.973224       6 log.go:172] (0xc001dec8f0) (0xc002714e60) Stream added, broadcasting: 1
I0720 21:53:47.975593       6 log.go:172] (0xc001dec8f0) Reply frame received for 1
I0720 21:53:47.975622       6 log.go:172] (0xc001dec8f0) (0xc00291cd20) Create stream
I0720 21:53:47.975631       6 log.go:172] (0xc001dec8f0) (0xc00291cd20) Stream added, broadcasting: 3
I0720 21:53:47.976422       6 log.go:172] (0xc001dec8f0) Reply frame received for 3
I0720 21:53:47.976491       6 log.go:172] (0xc001dec8f0) (0xc00291cdc0) Create stream
I0720 21:53:47.976511       6 log.go:172] (0xc001dec8f0) (0xc00291cdc0) Stream added, broadcasting: 5
I0720 21:53:47.977409       6 log.go:172] (0xc001dec8f0) Reply frame received for 5
I0720 21:53:48.036499       6 log.go:172] (0xc001dec8f0) Data frame received for 3
I0720 21:53:48.036545       6 log.go:172] (0xc00291cd20) (3) Data frame handling
I0720 21:53:48.036562       6 log.go:172] (0xc00291cd20) (3) Data frame sent
I0720 21:53:48.036575       6 log.go:172] (0xc001dec8f0) Data frame received for 3
I0720 21:53:48.036590       6 log.go:172] (0xc00291cd20) (3) Data frame handling
I0720 21:53:48.036644       6 log.go:172] (0xc001dec8f0) Data frame received for 5
I0720 21:53:48.036680       6 log.go:172] (0xc00291cdc0) (5) Data frame handling
I0720 21:53:48.037994       6 log.go:172] (0xc001dec8f0) Data frame received for 1
I0720 21:53:48.038021       6 log.go:172] (0xc002714e60) (1) Data frame handling
I0720 21:53:48.038040       6 log.go:172] (0xc002714e60) (1) Data frame sent
I0720 21:53:48.038061       6 log.go:172] (0xc001dec8f0) (0xc002714e60) Stream removed, broadcasting: 1
I0720 21:53:48.038109       6 log.go:172] (0xc001dec8f0) Go away received
I0720 21:53:48.038214       6 log.go:172] (0xc001dec8f0) (0xc002714e60) Stream removed, broadcasting: 1
I0720 21:53:48.038248       6 log.go:172] (0xc001dec8f0) (0xc00291cd20) Stream removed, broadcasting: 3
I0720 21:53:48.038276       6 log.go:172] (0xc001dec8f0) (0xc00291cdc0) Stream removed, broadcasting: 5
Jul 20 21:53:48.038: INFO: Exec stderr: ""
Jul 20 21:53:48.038: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9637 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:53:48.038: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:53:48.072675       6 log.go:172] (0xc001bf8160) (0xc00291d040) Create stream
I0720 21:53:48.072702       6 log.go:172] (0xc001bf8160) (0xc00291d040) Stream added, broadcasting: 1
I0720 21:53:48.075065       6 log.go:172] (0xc001bf8160) Reply frame received for 1
I0720 21:53:48.075109       6 log.go:172] (0xc001bf8160) (0xc002714f00) Create stream
I0720 21:53:48.075126       6 log.go:172] (0xc001bf8160) (0xc002714f00) Stream added, broadcasting: 3
I0720 21:53:48.076175       6 log.go:172] (0xc001bf8160) Reply frame received for 3
I0720 21:53:48.076223       6 log.go:172] (0xc001bf8160) (0xc00296abe0) Create stream
I0720 21:53:48.076239       6 log.go:172] (0xc001bf8160) (0xc00296abe0) Stream added, broadcasting: 5
I0720 21:53:48.077292       6 log.go:172] (0xc001bf8160) Reply frame received for 5
I0720 21:53:48.140219       6 log.go:172] (0xc001bf8160) Data frame received for 5
I0720 21:53:48.140279       6 log.go:172] (0xc00296abe0) (5) Data frame handling
I0720 21:53:48.140318       6 log.go:172] (0xc001bf8160) Data frame received for 3
I0720 21:53:48.140338       6 log.go:172] (0xc002714f00) (3) Data frame handling
I0720 21:53:48.140365       6 log.go:172] (0xc002714f00) (3) Data frame sent
I0720 21:53:48.140385       6 log.go:172] (0xc001bf8160) Data frame received for 3
I0720 21:53:48.140404       6 log.go:172] (0xc002714f00) (3) Data frame handling
I0720 21:53:48.141670       6 log.go:172] (0xc001bf8160) Data frame received for 1
I0720 21:53:48.141703       6 log.go:172] (0xc00291d040) (1) Data frame handling
I0720 21:53:48.141723       6 log.go:172] (0xc00291d040) (1) Data frame sent
I0720 21:53:48.141749       6 log.go:172] (0xc001bf8160) (0xc00291d040) Stream removed, broadcasting: 1
I0720 21:53:48.141835       6 log.go:172] (0xc001bf8160) (0xc00291d040) Stream removed, broadcasting: 1
I0720 21:53:48.141852       6 log.go:172] (0xc001bf8160) (0xc002714f00) Stream removed, broadcasting: 3
I0720 21:53:48.141897       6 log.go:172] (0xc001bf8160) Go away received
I0720 21:53:48.142062       6 log.go:172] (0xc001bf8160) (0xc00296abe0) Stream removed, broadcasting: 5
Jul 20 21:53:48.142: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul 20 21:53:48.142: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9637 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:53:48.142: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:53:48.176249       6 log.go:172] (0xc001ded080) (0xc0027150e0) Create stream
I0720 21:53:48.176279       6 log.go:172] (0xc001ded080) (0xc0027150e0) Stream added, broadcasting: 1
I0720 21:53:48.178750       6 log.go:172] (0xc001ded080) Reply frame received for 1
I0720 21:53:48.178787       6 log.go:172] (0xc001ded080) (0xc00291d180) Create stream
I0720 21:53:48.178800       6 log.go:172] (0xc001ded080) (0xc00291d180) Stream added, broadcasting: 3
I0720 21:53:48.179555       6 log.go:172] (0xc001ded080) Reply frame received for 3
I0720 21:53:48.179591       6 log.go:172] (0xc001ded080) (0xc00296ac80) Create stream
I0720 21:53:48.179606       6 log.go:172] (0xc001ded080) (0xc00296ac80) Stream added, broadcasting: 5
I0720 21:53:48.180592       6 log.go:172] (0xc001ded080) Reply frame received for 5
I0720 21:53:48.250592       6 log.go:172] (0xc001ded080) Data frame received for 5
I0720 21:53:48.250622       6 log.go:172] (0xc00296ac80) (5) Data frame handling
I0720 21:53:48.250640       6 log.go:172] (0xc001ded080) Data frame received for 3
I0720 21:53:48.250650       6 log.go:172] (0xc00291d180) (3) Data frame handling
I0720 21:53:48.250664       6 log.go:172] (0xc00291d180) (3) Data frame sent
I0720 21:53:48.250676       6 log.go:172] (0xc001ded080) Data frame received for 3
I0720 21:53:48.250686       6 log.go:172] (0xc00291d180) (3) Data frame handling
I0720 21:53:48.251976       6 log.go:172] (0xc001ded080) Data frame received for 1
I0720 21:53:48.252005       6 log.go:172] (0xc0027150e0) (1) Data frame handling
I0720 21:53:48.252033       6 log.go:172] (0xc0027150e0) (1) Data frame sent
I0720 21:53:48.252072       6 log.go:172] (0xc001ded080) (0xc0027150e0) Stream removed, broadcasting: 1
I0720 21:53:48.252149       6 log.go:172] (0xc001ded080) (0xc0027150e0) Stream removed, broadcasting: 1
I0720 21:53:48.252164       6 log.go:172] (0xc001ded080) (0xc00291d180) Stream removed, broadcasting: 3
I0720 21:53:48.252312       6 log.go:172] (0xc001ded080) (0xc00296ac80) Stream removed, broadcasting: 5
Jul 20 21:53:48.252: INFO: Exec stderr: ""
I0720 21:53:48.252357       6 log.go:172] (0xc001ded080) Go away received
Jul 20 21:53:48.252: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9637 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:53:48.252: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:53:48.288299       6 log.go:172] (0xc002410bb0) (0xc002717360) Create stream
I0720 21:53:48.288329       6 log.go:172] (0xc002410bb0) (0xc002717360) Stream added, broadcasting: 1
I0720 21:53:48.290670       6 log.go:172] (0xc002410bb0) Reply frame received for 1
I0720 21:53:48.290699       6 log.go:172] (0xc002410bb0) (0xc002717400) Create stream
I0720 21:53:48.290708       6 log.go:172] (0xc002410bb0) (0xc002717400) Stream added, broadcasting: 3
I0720 21:53:48.291483       6 log.go:172] (0xc002410bb0) Reply frame received for 3
I0720 21:53:48.291517       6 log.go:172] (0xc002410bb0) (0xc00291d220) Create stream
I0720 21:53:48.291530       6 log.go:172] (0xc002410bb0) (0xc00291d220) Stream added, broadcasting: 5
I0720 21:53:48.292311       6 log.go:172] (0xc002410bb0) Reply frame received for 5
I0720 21:53:48.335966       6 log.go:172] (0xc002410bb0) Data frame received for 5
I0720 21:53:48.336010       6 log.go:172] (0xc002410bb0) Data frame received for 3
I0720 21:53:48.336037       6 log.go:172] (0xc002717400) (3) Data frame handling
I0720 21:53:48.336051       6 log.go:172] (0xc002717400) (3) Data frame sent
I0720 21:53:48.336073       6 log.go:172] (0xc00291d220) (5) Data frame handling
I0720 21:53:48.336278       6 log.go:172] (0xc002410bb0) Data frame received for 3
I0720 21:53:48.336293       6 log.go:172] (0xc002717400) (3) Data frame handling
I0720 21:53:48.338087       6 log.go:172] (0xc002410bb0) Data frame received for 1
I0720 21:53:48.338147       6 log.go:172] (0xc002717360) (1) Data frame handling
I0720 21:53:48.338171       6 log.go:172] (0xc002717360) (1) Data frame sent
I0720 21:53:48.338196       6 log.go:172] (0xc002410bb0) (0xc002717360) Stream removed, broadcasting: 1
I0720 21:53:48.338227       6 log.go:172] (0xc002410bb0) Go away received
I0720 21:53:48.338400       6 log.go:172] (0xc002410bb0) (0xc002717360) Stream removed, broadcasting: 1
I0720 21:53:48.338430       6 log.go:172] (0xc002410bb0) (0xc002717400) Stream removed, broadcasting: 3
I0720 21:53:48.338444       6 log.go:172] (0xc002410bb0) (0xc00291d220) Stream removed, broadcasting: 5
Jul 20 21:53:48.338: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul 20 21:53:48.338: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9637 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:53:48.338: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:53:48.374742       6 log.go:172] (0xc0024111e0) (0xc002717680) Create stream
I0720 21:53:48.374787       6 log.go:172] (0xc0024111e0) (0xc002717680) Stream added, broadcasting: 1
I0720 21:53:48.376720       6 log.go:172] (0xc0024111e0) Reply frame received for 1
I0720 21:53:48.376813       6 log.go:172] (0xc0024111e0) (0xc00291d2c0) Create stream
I0720 21:53:48.376836       6 log.go:172] (0xc0024111e0) (0xc00291d2c0) Stream added, broadcasting: 3
I0720 21:53:48.377758       6 log.go:172] (0xc0024111e0) Reply frame received for 3
I0720 21:53:48.377801       6 log.go:172] (0xc0024111e0) (0xc002715220) Create stream
I0720 21:53:48.377810       6 log.go:172] (0xc0024111e0) (0xc002715220) Stream added, broadcasting: 5
I0720 21:53:48.378796       6 log.go:172] (0xc0024111e0) Reply frame received for 5
I0720 21:53:48.453446       6 log.go:172] (0xc0024111e0) Data frame received for 3
I0720 21:53:48.453481       6 log.go:172] (0xc00291d2c0) (3) Data frame handling
I0720 21:53:48.453495       6 log.go:172] (0xc00291d2c0) (3) Data frame sent
I0720 21:53:48.453505       6 log.go:172] (0xc0024111e0) Data frame received for 3
I0720 21:53:48.453514       6 log.go:172] (0xc00291d2c0) (3) Data frame handling
I0720 21:53:48.453578       6 log.go:172] (0xc0024111e0) Data frame received for 5
I0720 21:53:48.453616       6 log.go:172] (0xc002715220) (5) Data frame handling
I0720 21:53:48.454906       6 log.go:172] (0xc0024111e0) Data frame received for 1
I0720 21:53:48.454925       6 log.go:172] (0xc002717680) (1) Data frame handling
I0720 21:53:48.454939       6 log.go:172] (0xc002717680) (1) Data frame sent
I0720 21:53:48.454953       6 log.go:172] (0xc0024111e0) (0xc002717680) Stream removed, broadcasting: 1
I0720 21:53:48.454973       6 log.go:172] (0xc0024111e0) Go away received
I0720 21:53:48.455195       6 log.go:172] (0xc0024111e0) (0xc002717680) Stream removed, broadcasting: 1
I0720 21:53:48.455219       6 log.go:172] (0xc0024111e0) (0xc00291d2c0) Stream removed, broadcasting: 3
I0720 21:53:48.455237       6 log.go:172] (0xc0024111e0) (0xc002715220) Stream removed, broadcasting: 5
Jul 20 21:53:48.455: INFO: Exec stderr: ""
Jul 20 21:53:48.455: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9637 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:53:48.455: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:53:48.479197       6 log.go:172] (0xc001e629a0) (0xc00296b2c0) Create stream
I0720 21:53:48.479218       6 log.go:172] (0xc001e629a0) (0xc00296b2c0) Stream added, broadcasting: 1
I0720 21:53:48.480926       6 log.go:172] (0xc001e629a0) Reply frame received for 1
I0720 21:53:48.480961       6 log.go:172] (0xc001e629a0) (0xc00291d360) Create stream
I0720 21:53:48.480974       6 log.go:172] (0xc001e629a0) (0xc00291d360) Stream added, broadcasting: 3
I0720 21:53:48.481859       6 log.go:172] (0xc001e629a0) Reply frame received for 3
I0720 21:53:48.481901       6 log.go:172] (0xc001e629a0) (0xc002715360) Create stream
I0720 21:53:48.481917       6 log.go:172] (0xc001e629a0) (0xc002715360) Stream added, broadcasting: 5
I0720 21:53:48.482785       6 log.go:172] (0xc001e629a0) Reply frame received for 5
I0720 21:53:48.552097       6 log.go:172] (0xc001e629a0) Data frame received for 3
I0720 21:53:48.552125       6 log.go:172] (0xc00291d360) (3) Data frame handling
I0720 21:53:48.552150       6 log.go:172] (0xc001e629a0) Data frame received for 5
I0720 21:53:48.552180       6 log.go:172] (0xc002715360) (5) Data frame handling
I0720 21:53:48.552239       6 log.go:172] (0xc00291d360) (3) Data frame sent
I0720 21:53:48.552300       6 log.go:172] (0xc001e629a0) Data frame received for 3
I0720 21:53:48.552341       6 log.go:172] (0xc00291d360) (3) Data frame handling
I0720 21:53:48.553957       6 log.go:172] (0xc001e629a0) Data frame received for 1
I0720 21:53:48.553982       6 log.go:172] (0xc00296b2c0) (1) Data frame handling
I0720 21:53:48.554008       6 log.go:172] (0xc00296b2c0) (1) Data frame sent
I0720 21:53:48.554036       6 log.go:172] (0xc001e629a0) (0xc00296b2c0) Stream removed, broadcasting: 1
I0720 21:53:48.554058       6 log.go:172] (0xc001e629a0) Go away received
I0720 21:53:48.554200       6 log.go:172] (0xc001e629a0) (0xc00296b2c0) Stream removed, broadcasting: 1
I0720 21:53:48.554230       6 log.go:172] (0xc001e629a0) (0xc00291d360) Stream removed, broadcasting: 3
I0720 21:53:48.554252       6 log.go:172] (0xc001e629a0) (0xc002715360) Stream removed, broadcasting: 5
Jul 20 21:53:48.554: INFO: Exec stderr: ""
Jul 20 21:53:48.554: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9637 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:53:48.554: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:53:48.591163       6 log.go:172] (0xc001bf8790) (0xc00291d680) Create stream
I0720 21:53:48.591198       6 log.go:172] (0xc001bf8790) (0xc00291d680) Stream added, broadcasting: 1
I0720 21:53:48.593672       6 log.go:172] (0xc001bf8790) Reply frame received for 1
I0720 21:53:48.593708       6 log.go:172] (0xc001bf8790) (0xc002717720) Create stream
I0720 21:53:48.593716       6 log.go:172] (0xc001bf8790) (0xc002717720) Stream added, broadcasting: 3
I0720 21:53:48.594755       6 log.go:172] (0xc001bf8790) Reply frame received for 3
I0720 21:53:48.594777       6 log.go:172] (0xc001bf8790) (0xc00296b360) Create stream
I0720 21:53:48.594785       6 log.go:172] (0xc001bf8790) (0xc00296b360) Stream added, broadcasting: 5
I0720 21:53:48.595825       6 log.go:172] (0xc001bf8790) Reply frame received for 5
I0720 21:53:48.645720       6 log.go:172] (0xc001bf8790) Data frame received for 5
I0720 21:53:48.645752       6 log.go:172] (0xc001bf8790) Data frame received for 3
I0720 21:53:48.645783       6 log.go:172] (0xc002717720) (3) Data frame handling
I0720 21:53:48.645800       6 log.go:172] (0xc002717720) (3) Data frame sent
I0720 21:53:48.645811       6 log.go:172] (0xc001bf8790) Data frame received for 3
I0720 21:53:48.645823       6 log.go:172] (0xc002717720) (3) Data frame handling
I0720 21:53:48.645847       6 log.go:172] (0xc00296b360) (5) Data frame handling
I0720 21:53:48.647293       6 log.go:172] (0xc001bf8790) Data frame received for 1
I0720 21:53:48.647321       6 log.go:172] (0xc00291d680) (1) Data frame handling
I0720 21:53:48.647340       6 log.go:172] (0xc00291d680) (1) Data frame sent
I0720 21:53:48.647363       6 log.go:172] (0xc001bf8790) (0xc00291d680) Stream removed, broadcasting: 1
I0720 21:53:48.647397       6 log.go:172] (0xc001bf8790) Go away received
I0720 21:53:48.647552       6 log.go:172] (0xc001bf8790) (0xc00291d680) Stream removed, broadcasting: 1
I0720 21:53:48.647585       6 log.go:172] (0xc001bf8790) (0xc002717720) Stream removed, broadcasting: 3
I0720 21:53:48.647608       6 log.go:172] (0xc001bf8790) (0xc00296b360) Stream removed, broadcasting: 5
Jul 20 21:53:48.647: INFO: Exec stderr: ""
Jul 20 21:53:48.647: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9637 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:53:48.647: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:53:48.680285       6 log.go:172] (0xc001bf8dc0) (0xc00291d860) Create stream
I0720 21:53:48.680311       6 log.go:172] (0xc001bf8dc0) (0xc00291d860) Stream added, broadcasting: 1
I0720 21:53:48.682836       6 log.go:172] (0xc001bf8dc0) Reply frame received for 1
I0720 21:53:48.682872       6 log.go:172] (0xc001bf8dc0) (0xc0027154a0) Create stream
I0720 21:53:48.682885       6 log.go:172] (0xc001bf8dc0) (0xc0027154a0) Stream added, broadcasting: 3
I0720 21:53:48.683744       6 log.go:172] (0xc001bf8dc0) Reply frame received for 3
I0720 21:53:48.683782       6 log.go:172] (0xc001bf8dc0) (0xc0027155e0) Create stream
I0720 21:53:48.683793       6 log.go:172] (0xc001bf8dc0) (0xc0027155e0) Stream added, broadcasting: 5
I0720 21:53:48.684686       6 log.go:172] (0xc001bf8dc0) Reply frame received for 5
I0720 21:53:48.751427       6 log.go:172] (0xc001bf8dc0) Data frame received for 5
I0720 21:53:48.751468       6 log.go:172] (0xc0027155e0) (5) Data frame handling
I0720 21:53:48.751502       6 log.go:172] (0xc001bf8dc0) Data frame received for 3
I0720 21:53:48.751519       6 log.go:172] (0xc0027154a0) (3) Data frame handling
I0720 21:53:48.751537       6 log.go:172] (0xc0027154a0) (3) Data frame sent
I0720 21:53:48.751551       6 log.go:172] (0xc001bf8dc0) Data frame received for 3
I0720 21:53:48.751570       6 log.go:172] (0xc0027154a0) (3) Data frame handling
I0720 21:53:48.752709       6 log.go:172] (0xc001bf8dc0) Data frame received for 1
I0720 21:53:48.752943       6 log.go:172] (0xc00291d860) (1) Data frame handling
I0720 21:53:48.752995       6 log.go:172] (0xc00291d860) (1) Data frame sent
I0720 21:53:48.753032       6 log.go:172] (0xc001bf8dc0) (0xc00291d860) Stream removed, broadcasting: 1
I0720 21:53:48.753049       6 log.go:172] (0xc001bf8dc0) Go away received
I0720 21:53:48.753227       6 log.go:172] (0xc001bf8dc0) (0xc00291d860) Stream removed, broadcasting: 1
I0720 21:53:48.753265       6 log.go:172] (0xc001bf8dc0) (0xc0027154a0) Stream removed, broadcasting: 3
I0720 21:53:48.753292       6 log.go:172] (0xc001bf8dc0) (0xc0027155e0) Stream removed, broadcasting: 5
Jul 20 21:53:48.753: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:53:48.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-9637" for this suite.

• [SLOW TEST:11.233 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3560,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:53:48.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9560
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-9560
I0720 21:53:48.947339       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9560, replica count: 2
I0720 21:53:51.997791       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 21:53:54.998050       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 20 21:53:54.998: INFO: Creating new exec pod
Jul 20 21:54:00.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9560 execpod7k6xq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul 20 21:54:00.223: INFO: stderr: "I0720 21:54:00.151974    3517 log.go:172] (0xc000912f20) (0xc00095a320) Create stream\nI0720 21:54:00.152046    3517 log.go:172] (0xc000912f20) (0xc00095a320) Stream added, broadcasting: 1\nI0720 21:54:00.156648    3517 log.go:172] (0xc000912f20) Reply frame received for 1\nI0720 21:54:00.156693    3517 log.go:172] (0xc000912f20) (0xc0006fdb80) Create stream\nI0720 21:54:00.156710    3517 log.go:172] (0xc000912f20) (0xc0006fdb80) Stream added, broadcasting: 3\nI0720 21:54:00.157668    3517 log.go:172] (0xc000912f20) Reply frame received for 3\nI0720 21:54:00.157697    3517 log.go:172] (0xc000912f20) (0xc0006ae780) Create stream\nI0720 21:54:00.157707    3517 log.go:172] (0xc000912f20) (0xc0006ae780) Stream added, broadcasting: 5\nI0720 21:54:00.158701    3517 log.go:172] (0xc000912f20) Reply frame received for 5\nI0720 21:54:00.216083    3517 log.go:172] (0xc000912f20) Data frame received for 5\nI0720 21:54:00.216133    3517 log.go:172] (0xc0006ae780) (5) Data frame handling\nI0720 21:54:00.216161    3517 log.go:172] (0xc0006ae780) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0720 21:54:00.216319    3517 log.go:172] (0xc000912f20) Data frame received for 5\nI0720 21:54:00.216337    3517 log.go:172] (0xc0006ae780) (5) Data frame handling\nI0720 21:54:00.216354    3517 log.go:172] (0xc0006ae780) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0720 21:54:00.216853    3517 log.go:172] (0xc000912f20) Data frame received for 3\nI0720 21:54:00.216892    3517 log.go:172] (0xc0006fdb80) (3) Data frame handling\nI0720 21:54:00.217024    3517 log.go:172] (0xc000912f20) Data frame received for 5\nI0720 21:54:00.217055    3517 log.go:172] (0xc0006ae780) (5) Data frame handling\nI0720 21:54:00.219021    3517 log.go:172] (0xc000912f20) Data frame received for 1\nI0720 21:54:00.219050    3517 log.go:172] (0xc00095a320) (1) Data frame handling\nI0720 21:54:00.219071    3517 log.go:172] (0xc00095a320) (1) Data frame sent\nI0720 21:54:00.219096    3517 log.go:172] (0xc000912f20) (0xc00095a320) Stream removed, broadcasting: 1\nI0720 21:54:00.219148    3517 log.go:172] (0xc000912f20) Go away received\nI0720 21:54:00.219405    3517 log.go:172] (0xc000912f20) (0xc00095a320) Stream removed, broadcasting: 1\nI0720 21:54:00.219419    3517 log.go:172] (0xc000912f20) (0xc0006fdb80) Stream removed, broadcasting: 3\nI0720 21:54:00.219425    3517 log.go:172] (0xc000912f20) (0xc0006ae780) Stream removed, broadcasting: 5\n"
Jul 20 21:54:00.223: INFO: stdout: ""
Jul 20 21:54:00.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9560 execpod7k6xq -- /bin/sh -x -c nc -zv -t -w 2 10.103.215.178 80'
Jul 20 21:54:00.432: INFO: stderr: "I0720 21:54:00.359945    3537 log.go:172] (0xc000109600) (0xc000617b80) Create stream\nI0720 21:54:00.360007    3537 log.go:172] (0xc000109600) (0xc000617b80) Stream added, broadcasting: 1\nI0720 21:54:00.363033    3537 log.go:172] (0xc000109600) Reply frame received for 1\nI0720 21:54:00.363080    3537 log.go:172] (0xc000109600) (0xc000972000) Create stream\nI0720 21:54:00.363096    3537 log.go:172] (0xc000109600) (0xc000972000) Stream added, broadcasting: 3\nI0720 21:54:00.364078    3537 log.go:172] (0xc000109600) Reply frame received for 3\nI0720 21:54:00.364115    3537 log.go:172] (0xc000109600) (0xc000456000) Create stream\nI0720 21:54:00.364131    3537 log.go:172] (0xc000109600) (0xc000456000) Stream added, broadcasting: 5\nI0720 21:54:00.365006    3537 log.go:172] (0xc000109600) Reply frame received for 5\nI0720 21:54:00.425904    3537 log.go:172] (0xc000109600) Data frame received for 5\nI0720 21:54:00.425940    3537 log.go:172] (0xc000456000) (5) Data frame handling\nI0720 21:54:00.425954    3537 log.go:172] (0xc000456000) (5) Data frame sent\nI0720 21:54:00.425960    3537 log.go:172] (0xc000109600) Data frame received for 5\nI0720 21:54:00.425964    3537 log.go:172] (0xc000456000) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.215.178 80\nConnection to 10.103.215.178 80 port [tcp/http] succeeded!\nI0720 21:54:00.425990    3537 log.go:172] (0xc000109600) Data frame received for 3\nI0720 21:54:00.425995    3537 log.go:172] (0xc000972000) (3) Data frame handling\nI0720 21:54:00.427405    3537 log.go:172] (0xc000109600) Data frame received for 1\nI0720 21:54:00.427424    3537 log.go:172] (0xc000617b80) (1) Data frame handling\nI0720 21:54:00.427441    3537 log.go:172] (0xc000617b80) (1) Data frame sent\nI0720 21:54:00.427451    3537 log.go:172] (0xc000109600) (0xc000617b80) Stream removed, broadcasting: 1\nI0720 21:54:00.427471    3537 log.go:172] (0xc000109600) Go away received\nI0720 21:54:00.427880    3537 log.go:172] (0xc000109600) (0xc000617b80) Stream removed, broadcasting: 1\nI0720 21:54:00.427900    3537 log.go:172] (0xc000109600) (0xc000972000) Stream removed, broadcasting: 3\nI0720 21:54:00.427909    3537 log.go:172] (0xc000109600) (0xc000456000) Stream removed, broadcasting: 5\n"
Jul 20 21:54:00.432: INFO: stdout: ""
Jul 20 21:54:00.432: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:54:00.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9560" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:11.701 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":223,"skipped":3589,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:54:00.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:54:00.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 20 21:54:03.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4669 create -f -'
Jul 20 21:54:07.713: INFO: stderr: ""
Jul 20 21:54:07.713: INFO: stdout: "e2e-test-crd-publish-openapi-3774-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 20 21:54:07.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4669 delete e2e-test-crd-publish-openapi-3774-crds test-cr'
Jul 20 21:54:07.805: INFO: stderr: ""
Jul 20 21:54:07.805: INFO: stdout: "e2e-test-crd-publish-openapi-3774-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jul 20 21:54:07.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4669 apply -f -'
Jul 20 21:54:08.029: INFO: stderr: ""
Jul 20 21:54:08.029: INFO: stdout: "e2e-test-crd-publish-openapi-3774-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 20 21:54:08.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4669 delete e2e-test-crd-publish-openapi-3774-crds test-cr'
Jul 20 21:54:08.170: INFO: stderr: ""
Jul 20 21:54:08.170: INFO: stdout: "e2e-test-crd-publish-openapi-3774-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul 20 21:54:08.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3774-crds'
Jul 20 21:54:08.410: INFO: stderr: ""
Jul 20 21:54:08.410: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3774-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:54:10.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4669" for this suite.

• [SLOW TEST:9.885 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":224,"skipped":3590,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:54:10.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Jul 20 21:54:10.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1226'
Jul 20 21:54:10.686: INFO: stderr: ""
Jul 20 21:54:10.686: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 21:54:10.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1226'
Jul 20 21:54:10.811: INFO: stderr: ""
Jul 20 21:54:10.811: INFO: stdout: "update-demo-nautilus-8xkmt update-demo-nautilus-sqnp4 "
Jul 20 21:54:10.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8xkmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1226'
Jul 20 21:54:10.933: INFO: stderr: ""
Jul 20 21:54:10.933: INFO: stdout: ""
Jul 20 21:54:10.933: INFO: update-demo-nautilus-8xkmt is created but not running
Jul 20 21:54:15.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1226'
Jul 20 21:54:16.022: INFO: stderr: ""
Jul 20 21:54:16.022: INFO: stdout: "update-demo-nautilus-8xkmt update-demo-nautilus-sqnp4 "
Jul 20 21:54:16.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8xkmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1226'
Jul 20 21:54:16.111: INFO: stderr: ""
Jul 20 21:54:16.111: INFO: stdout: "true"
Jul 20 21:54:16.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8xkmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1226'
Jul 20 21:54:16.198: INFO: stderr: ""
Jul 20 21:54:16.198: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 21:54:16.198: INFO: validating pod update-demo-nautilus-8xkmt
Jul 20 21:54:16.202: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 21:54:16.202: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 21:54:16.202: INFO: update-demo-nautilus-8xkmt is verified up and running
Jul 20 21:54:16.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqnp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1226'
Jul 20 21:54:16.292: INFO: stderr: ""
Jul 20 21:54:16.292: INFO: stdout: "true"
Jul 20 21:54:16.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sqnp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1226'
Jul 20 21:54:16.376: INFO: stderr: ""
Jul 20 21:54:16.376: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 20 21:54:16.376: INFO: validating pod update-demo-nautilus-sqnp4
Jul 20 21:54:16.398: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 20 21:54:16.398: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 20 21:54:16.398: INFO: update-demo-nautilus-sqnp4 is verified up and running
STEP: rolling-update to new replication controller
Jul 20 21:54:16.401: INFO: scanned /root for discovery docs: 
Jul 20 21:54:16.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1226'
Jul 20 21:54:39.904: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 20 21:54:39.904: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 20 21:54:39.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1226'
Jul 20 21:54:40.004: INFO: stderr: ""
Jul 20 21:54:40.004: INFO: stdout: "update-demo-kitten-gk6hs update-demo-kitten-wf54m "
Jul 20 21:54:40.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gk6hs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1226'
Jul 20 21:54:40.088: INFO: stderr: ""
Jul 20 21:54:40.088: INFO: stdout: "true"
Jul 20 21:54:40.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gk6hs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1226'
Jul 20 21:54:40.185: INFO: stderr: ""
Jul 20 21:54:40.185: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 20 21:54:40.185: INFO: validating pod update-demo-kitten-gk6hs
Jul 20 21:54:40.189: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 20 21:54:40.189: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 20 21:54:40.189: INFO: update-demo-kitten-gk6hs is verified up and running
Jul 20 21:54:40.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wf54m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1226'
Jul 20 21:54:40.272: INFO: stderr: ""
Jul 20 21:54:40.272: INFO: stdout: "true"
Jul 20 21:54:40.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wf54m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1226'
Jul 20 21:54:40.366: INFO: stderr: ""
Jul 20 21:54:40.366: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 20 21:54:40.366: INFO: validating pod update-demo-kitten-wf54m
Jul 20 21:54:40.370: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 20 21:54:40.370: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 20 21:54:40.370: INFO: update-demo-kitten-wf54m is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:54:40.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1226" for this suite.

• [SLOW TEST:30.028 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":225,"skipped":3612,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:54:40.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:54:40.495: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul 20 21:54:40.510: INFO: Number of nodes with available pods: 0
Jul 20 21:54:40.510: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul 20 21:54:40.545: INFO: Number of nodes with available pods: 0
Jul 20 21:54:40.546: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:41.549: INFO: Number of nodes with available pods: 0
Jul 20 21:54:41.549: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:42.550: INFO: Number of nodes with available pods: 0
Jul 20 21:54:42.550: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:43.549: INFO: Number of nodes with available pods: 0
Jul 20 21:54:43.549: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:44.550: INFO: Number of nodes with available pods: 1
Jul 20 21:54:44.550: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul 20 21:54:44.619: INFO: Number of nodes with available pods: 1
Jul 20 21:54:44.619: INFO: Number of running nodes: 0, number of available pods: 1
Jul 20 21:54:45.634: INFO: Number of nodes with available pods: 0
Jul 20 21:54:45.634: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul 20 21:54:45.667: INFO: Number of nodes with available pods: 0
Jul 20 21:54:45.667: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:46.672: INFO: Number of nodes with available pods: 0
Jul 20 21:54:46.672: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:47.716: INFO: Number of nodes with available pods: 0
Jul 20 21:54:47.716: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:48.675: INFO: Number of nodes with available pods: 0
Jul 20 21:54:48.675: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:49.672: INFO: Number of nodes with available pods: 0
Jul 20 21:54:49.672: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:50.671: INFO: Number of nodes with available pods: 0
Jul 20 21:54:50.671: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:51.671: INFO: Number of nodes with available pods: 0
Jul 20 21:54:51.671: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:52.671: INFO: Number of nodes with available pods: 0
Jul 20 21:54:52.671: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:53.670: INFO: Number of nodes with available pods: 0
Jul 20 21:54:53.670: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:54.671: INFO: Number of nodes with available pods: 0
Jul 20 21:54:54.671: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:55.672: INFO: Number of nodes with available pods: 0
Jul 20 21:54:55.672: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:56.671: INFO: Number of nodes with available pods: 0
Jul 20 21:54:56.671: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:57.679: INFO: Number of nodes with available pods: 0
Jul 20 21:54:57.679: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:58.671: INFO: Number of nodes with available pods: 0
Jul 20 21:54:58.671: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:54:59.704: INFO: Number of nodes with available pods: 0
Jul 20 21:54:59.704: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:55:00.671: INFO: Number of nodes with available pods: 0
Jul 20 21:55:00.671: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:55:01.672: INFO: Number of nodes with available pods: 1
Jul 20 21:55:01.672: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9463, will wait for the garbage collector to delete the pods
Jul 20 21:55:01.738: INFO: Deleting DaemonSet.extensions daemon-set took: 6.609472ms
Jul 20 21:55:01.838: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.325796ms
Jul 20 21:55:07.553: INFO: Number of nodes with available pods: 0
Jul 20 21:55:07.553: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 21:55:07.556: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9463/daemonsets","resourceVersion":"2881595"},"items":null}

Jul 20 21:55:07.558: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9463/pods","resourceVersion":"2881595"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:55:07.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9463" for this suite.

• [SLOW TEST:27.221 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":226,"skipped":3619,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:55:07.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2966
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 20 21:55:07.645: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 20 21:55:27.859: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.182 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2966 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:55:27.859: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:55:27.881671       6 log.go:172] (0xc00145f600) (0xc00278d0e0) Create stream
I0720 21:55:27.881699       6 log.go:172] (0xc00145f600) (0xc00278d0e0) Stream added, broadcasting: 1
I0720 21:55:27.883611       6 log.go:172] (0xc00145f600) Reply frame received for 1
I0720 21:55:27.883657       6 log.go:172] (0xc00145f600) (0xc00291df40) Create stream
I0720 21:55:27.883678       6 log.go:172] (0xc00145f600) (0xc00291df40) Stream added, broadcasting: 3
I0720 21:55:27.884685       6 log.go:172] (0xc00145f600) Reply frame received for 3
I0720 21:55:27.884792       6 log.go:172] (0xc00145f600) (0xc001864a00) Create stream
I0720 21:55:27.884805       6 log.go:172] (0xc00145f600) (0xc001864a00) Stream added, broadcasting: 5
I0720 21:55:27.885652       6 log.go:172] (0xc00145f600) Reply frame received for 5
I0720 21:55:28.957826       6 log.go:172] (0xc00145f600) Data frame received for 5
I0720 21:55:28.957877       6 log.go:172] (0xc001864a00) (5) Data frame handling
I0720 21:55:28.957931       6 log.go:172] (0xc00145f600) Data frame received for 3
I0720 21:55:28.957954       6 log.go:172] (0xc00291df40) (3) Data frame handling
I0720 21:55:28.957992       6 log.go:172] (0xc00291df40) (3) Data frame sent
I0720 21:55:28.958015       6 log.go:172] (0xc00145f600) Data frame received for 3
I0720 21:55:28.958035       6 log.go:172] (0xc00291df40) (3) Data frame handling
I0720 21:55:28.960431       6 log.go:172] (0xc00145f600) Data frame received for 1
I0720 21:55:28.960518       6 log.go:172] (0xc00278d0e0) (1) Data frame handling
I0720 21:55:28.960554       6 log.go:172] (0xc00278d0e0) (1) Data frame sent
I0720 21:55:28.960572       6 log.go:172] (0xc00145f600) (0xc00278d0e0) Stream removed, broadcasting: 1
I0720 21:55:28.960592       6 log.go:172] (0xc00145f600) Go away received
I0720 21:55:28.960696       6 log.go:172] (0xc00145f600) (0xc00278d0e0) Stream removed, broadcasting: 1
I0720 21:55:28.960842       6 log.go:172] (0xc00145f600) (0xc00291df40) Stream removed, broadcasting: 3
I0720 21:55:28.960893       6 log.go:172] (0xc00145f600) (0xc001864a00) Stream removed, broadcasting: 5
Jul 20 21:55:28.960: INFO: Found all expected endpoints: [netserver-0]
Jul 20 21:55:28.964: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.138 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2966 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:55:28.965: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:55:28.998866       6 log.go:172] (0xc0015a86e0) (0xc001864c80) Create stream
I0720 21:55:28.998899       6 log.go:172] (0xc0015a86e0) (0xc001864c80) Stream added, broadcasting: 1
I0720 21:55:29.001653       6 log.go:172] (0xc0015a86e0) Reply frame received for 1
I0720 21:55:29.001705       6 log.go:172] (0xc0015a86e0) (0xc00278d220) Create stream
I0720 21:55:29.001723       6 log.go:172] (0xc0015a86e0) (0xc00278d220) Stream added, broadcasting: 3
I0720 21:55:29.002984       6 log.go:172] (0xc0015a86e0) Reply frame received for 3
I0720 21:55:29.003029       6 log.go:172] (0xc0015a86e0) (0xc002630000) Create stream
I0720 21:55:29.003045       6 log.go:172] (0xc0015a86e0) (0xc002630000) Stream added, broadcasting: 5
I0720 21:55:29.004196       6 log.go:172] (0xc0015a86e0) Reply frame received for 5
I0720 21:55:30.087671       6 log.go:172] (0xc0015a86e0) Data frame received for 3
I0720 21:55:30.087705       6 log.go:172] (0xc00278d220) (3) Data frame handling
I0720 21:55:30.087714       6 log.go:172] (0xc00278d220) (3) Data frame sent
I0720 21:55:30.087723       6 log.go:172] (0xc0015a86e0) Data frame received for 3
I0720 21:55:30.087732       6 log.go:172] (0xc00278d220) (3) Data frame handling
I0720 21:55:30.087750       6 log.go:172] (0xc0015a86e0) Data frame received for 5
I0720 21:55:30.087759       6 log.go:172] (0xc002630000) (5) Data frame handling
I0720 21:55:30.089439       6 log.go:172] (0xc0015a86e0) Data frame received for 1
I0720 21:55:30.089458       6 log.go:172] (0xc001864c80) (1) Data frame handling
I0720 21:55:30.089466       6 log.go:172] (0xc001864c80) (1) Data frame sent
I0720 21:55:30.089475       6 log.go:172] (0xc0015a86e0) (0xc001864c80) Stream removed, broadcasting: 1
I0720 21:55:30.089513       6 log.go:172] (0xc0015a86e0) Go away received
I0720 21:55:30.089543       6 log.go:172] (0xc0015a86e0) (0xc001864c80) Stream removed, broadcasting: 1
I0720 21:55:30.089558       6 log.go:172] (0xc0015a86e0) (0xc00278d220) Stream removed, broadcasting: 3
I0720 21:55:30.089571       6 log.go:172] (0xc0015a86e0) (0xc002630000) Stream removed, broadcasting: 5
Jul 20 21:55:30.089: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:55:30.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2966" for this suite.

• [SLOW TEST:22.505 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3666,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:55:30.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 20 21:55:30.237: INFO: Waiting up to 5m0s for pod "pod-c4e15b53-72ca-48dc-89c1-6890e343ec3d" in namespace "emptydir-4244" to be "success or failure"
Jul 20 21:55:30.269: INFO: Pod "pod-c4e15b53-72ca-48dc-89c1-6890e343ec3d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.806322ms
Jul 20 21:55:32.321: INFO: Pod "pod-c4e15b53-72ca-48dc-89c1-6890e343ec3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083784529s
Jul 20 21:55:34.325: INFO: Pod "pod-c4e15b53-72ca-48dc-89c1-6890e343ec3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088064365s
STEP: Saw pod success
Jul 20 21:55:34.325: INFO: Pod "pod-c4e15b53-72ca-48dc-89c1-6890e343ec3d" satisfied condition "success or failure"
Jul 20 21:55:34.328: INFO: Trying to get logs from node jerma-worker pod pod-c4e15b53-72ca-48dc-89c1-6890e343ec3d container test-container: 
STEP: delete the pod
Jul 20 21:55:34.364: INFO: Waiting for pod pod-c4e15b53-72ca-48dc-89c1-6890e343ec3d to disappear
Jul 20 21:55:34.368: INFO: Pod pod-c4e15b53-72ca-48dc-89c1-6890e343ec3d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:55:34.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4244" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3668,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:55:34.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-a005e2dc-c182-4ee8-9c02-d8eb13954add in namespace container-probe-8687
Jul 20 21:55:40.513: INFO: Started pod busybox-a005e2dc-c182-4ee8-9c02-d8eb13954add in namespace container-probe-8687
STEP: checking the pod's current state and verifying that restartCount is present
Jul 20 21:55:40.515: INFO: Initial restart count of pod busybox-a005e2dc-c182-4ee8-9c02-d8eb13954add is 0
Jul 20 21:56:32.673: INFO: Restart count of pod container-probe-8687/busybox-a005e2dc-c182-4ee8-9c02-d8eb13954add is now 1 (52.157973351s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:56:32.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8687" for this suite.

• [SLOW TEST:58.338 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3694,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:56:32.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-3876
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 20 21:56:32.803: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 20 21:57:00.948: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.185:8080/dial?request=hostname&protocol=udp&host=10.244.2.184&port=8081&tries=1'] Namespace:pod-network-test-3876 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:57:00.948: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:57:00.981900       6 log.go:172] (0xc001e620b0) (0xc00291c3c0) Create stream
I0720 21:57:00.981942       6 log.go:172] (0xc001e620b0) (0xc00291c3c0) Stream added, broadcasting: 1
I0720 21:57:00.983697       6 log.go:172] (0xc001e620b0) Reply frame received for 1
I0720 21:57:00.983744       6 log.go:172] (0xc001e620b0) (0xc001df9360) Create stream
I0720 21:57:00.983765       6 log.go:172] (0xc001e620b0) (0xc001df9360) Stream added, broadcasting: 3
I0720 21:57:00.984694       6 log.go:172] (0xc001e620b0) Reply frame received for 3
I0720 21:57:00.984853       6 log.go:172] (0xc001e620b0) (0xc001df97c0) Create stream
I0720 21:57:00.984882       6 log.go:172] (0xc001e620b0) (0xc001df97c0) Stream added, broadcasting: 5
I0720 21:57:00.985683       6 log.go:172] (0xc001e620b0) Reply frame received for 5
I0720 21:57:01.061970       6 log.go:172] (0xc001e620b0) Data frame received for 3
I0720 21:57:01.062002       6 log.go:172] (0xc001df9360) (3) Data frame handling
I0720 21:57:01.062022       6 log.go:172] (0xc001df9360) (3) Data frame sent
I0720 21:57:01.062752       6 log.go:172] (0xc001e620b0) Data frame received for 3
I0720 21:57:01.062772       6 log.go:172] (0xc001df9360) (3) Data frame handling
I0720 21:57:01.062792       6 log.go:172] (0xc001e620b0) Data frame received for 5
I0720 21:57:01.062808       6 log.go:172] (0xc001df97c0) (5) Data frame handling
I0720 21:57:01.064470       6 log.go:172] (0xc001e620b0) Data frame received for 1
I0720 21:57:01.064486       6 log.go:172] (0xc00291c3c0) (1) Data frame handling
I0720 21:57:01.064498       6 log.go:172] (0xc00291c3c0) (1) Data frame sent
I0720 21:57:01.064512       6 log.go:172] (0xc001e620b0) (0xc00291c3c0) Stream removed, broadcasting: 1
I0720 21:57:01.064556       6 log.go:172] (0xc001e620b0) Go away received
I0720 21:57:01.064581       6 log.go:172] (0xc001e620b0) (0xc00291c3c0) Stream removed, broadcasting: 1
I0720 21:57:01.064591       6 log.go:172] (0xc001e620b0) (0xc001df9360) Stream removed, broadcasting: 3
I0720 21:57:01.064600       6 log.go:172] (0xc001e620b0) (0xc001df97c0) Stream removed, broadcasting: 5
Jul 20 21:57:01.064: INFO: Waiting for responses: map[]
Jul 20 21:57:01.068: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.185:8080/dial?request=hostname&protocol=udp&host=10.244.1.141&port=8081&tries=1'] Namespace:pod-network-test-3876 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 20 21:57:01.068: INFO: >>> kubeConfig: /root/.kube/config
I0720 21:57:01.098738       6 log.go:172] (0xc001dec000) (0xc002631220) Create stream
I0720 21:57:01.098762       6 log.go:172] (0xc001dec000) (0xc002631220) Stream added, broadcasting: 1
I0720 21:57:01.100562       6 log.go:172] (0xc001dec000) Reply frame received for 1
I0720 21:57:01.100597       6 log.go:172] (0xc001dec000) (0xc00291c460) Create stream
I0720 21:57:01.100608       6 log.go:172] (0xc001dec000) (0xc00291c460) Stream added, broadcasting: 3
I0720 21:57:01.101785       6 log.go:172] (0xc001dec000) Reply frame received for 3
I0720 21:57:01.101842       6 log.go:172] (0xc001dec000) (0xc0019a6000) Create stream
I0720 21:57:01.101862       6 log.go:172] (0xc001dec000) (0xc0019a6000) Stream added, broadcasting: 5
I0720 21:57:01.102936       6 log.go:172] (0xc001dec000) Reply frame received for 5
I0720 21:57:01.171323       6 log.go:172] (0xc001dec000) Data frame received for 3
I0720 21:57:01.171353       6 log.go:172] (0xc00291c460) (3) Data frame handling
I0720 21:57:01.171373       6 log.go:172] (0xc00291c460) (3) Data frame sent
I0720 21:57:01.172243       6 log.go:172] (0xc001dec000) Data frame received for 3
I0720 21:57:01.172290       6 log.go:172] (0xc00291c460) (3) Data frame handling
I0720 21:57:01.172331       6 log.go:172] (0xc001dec000) Data frame received for 5
I0720 21:57:01.172353       6 log.go:172] (0xc0019a6000) (5) Data frame handling
I0720 21:57:01.174400       6 log.go:172] (0xc001dec000) Data frame received for 1
I0720 21:57:01.174427       6 log.go:172] (0xc002631220) (1) Data frame handling
I0720 21:57:01.174464       6 log.go:172] (0xc002631220) (1) Data frame sent
I0720 21:57:01.174594       6 log.go:172] (0xc001dec000) (0xc002631220) Stream removed, broadcasting: 1
I0720 21:57:01.174700       6 log.go:172] (0xc001dec000) (0xc002631220) Stream removed, broadcasting: 1
I0720 21:57:01.174731       6 log.go:172] (0xc001dec000) (0xc00291c460) Stream removed, broadcasting: 3
I0720 21:57:01.174940       6 log.go:172] (0xc001dec000) (0xc0019a6000) Stream removed, broadcasting: 5
Jul 20 21:57:01.174: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
I0720 21:57:01.175049       6 log.go:172] (0xc001dec000) Go away received
Jul 20 21:57:01.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3876" for this suite.

• [SLOW TEST:28.469 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3718,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:57:01.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:57:05.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9003" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3728,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:57:05.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 20 21:57:05.431: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:05.466: INFO: Number of nodes with available pods: 0
Jul 20 21:57:05.466: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:57:06.599: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:06.616: INFO: Number of nodes with available pods: 0
Jul 20 21:57:06.616: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:57:07.503: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:07.506: INFO: Number of nodes with available pods: 0
Jul 20 21:57:07.506: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:57:08.586: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:08.589: INFO: Number of nodes with available pods: 0
Jul 20 21:57:08.589: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:57:09.471: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:09.473: INFO: Number of nodes with available pods: 0
Jul 20 21:57:09.473: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:57:10.471: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:10.474: INFO: Number of nodes with available pods: 2
Jul 20 21:57:10.474: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul 20 21:57:10.499: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:10.522: INFO: Number of nodes with available pods: 1
Jul 20 21:57:10.522: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:57:11.527: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:11.531: INFO: Number of nodes with available pods: 1
Jul 20 21:57:11.531: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:57:12.527: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:12.531: INFO: Number of nodes with available pods: 1
Jul 20 21:57:12.531: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:57:13.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:13.532: INFO: Number of nodes with available pods: 1
Jul 20 21:57:13.532: INFO: Node jerma-worker is running more than one daemon pod
Jul 20 21:57:14.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 20 21:57:14.532: INFO: Number of nodes with available pods: 2
Jul 20 21:57:14.532: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3186, will wait for the garbage collector to delete the pods
Jul 20 21:57:14.596: INFO: Deleting DaemonSet.extensions daemon-set took: 6.028951ms
Jul 20 21:57:14.897: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.339053ms
Jul 20 21:57:27.621: INFO: Number of nodes with available pods: 0
Jul 20 21:57:27.621: INFO: Number of running nodes: 0, number of available pods: 0
Jul 20 21:57:27.625: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3186/daemonsets","resourceVersion":"2882292"},"items":null}

Jul 20 21:57:27.629: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3186/pods","resourceVersion":"2882292"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:57:27.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3186" for this suite.

• [SLOW TEST:22.333 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":232,"skipped":3734,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:57:27.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-87f88d02-810a-46c8-bd21-339ab8b4e3ad
STEP: Creating configMap with name cm-test-opt-upd-ffc01211-4cba-4498-9cf8-e744085cb064
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-87f88d02-810a-46c8-bd21-339ab8b4e3ad
STEP: Updating configmap cm-test-opt-upd-ffc01211-4cba-4498-9cf8-e744085cb064
STEP: Creating configMap with name cm-test-opt-create-8ebd33b0-d081-4c5c-aafe-70f11d0f4714
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:57:37.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3154" for this suite.

• [SLOW TEST:10.295 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3749,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:57:37.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:57:37.997: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:57:39.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5881" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":234,"skipped":3756,"failed":0}
SS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:57:39.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 21:57:39.456: INFO: Creating ReplicaSet my-hostname-basic-1eb625ba-983e-4ce0-8a28-cce3c50430ba
Jul 20 21:57:39.481: INFO: Pod name my-hostname-basic-1eb625ba-983e-4ce0-8a28-cce3c50430ba: Found 0 pods out of 1
Jul 20 21:57:44.498: INFO: Pod name my-hostname-basic-1eb625ba-983e-4ce0-8a28-cce3c50430ba: Found 1 pods out of 1
Jul 20 21:57:44.498: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1eb625ba-983e-4ce0-8a28-cce3c50430ba" is running
Jul 20 21:57:44.501: INFO: Pod "my-hostname-basic-1eb625ba-983e-4ce0-8a28-cce3c50430ba-hdwnn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 21:57:39 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 21:57:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 21:57:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 21:57:39 +0000 UTC Reason: Message:}])
Jul 20 21:57:44.501: INFO: Trying to dial the pod
Jul 20 21:57:49.513: INFO: Controller my-hostname-basic-1eb625ba-983e-4ce0-8a28-cce3c50430ba: Got expected result from replica 1 [my-hostname-basic-1eb625ba-983e-4ce0-8a28-cce3c50430ba-hdwnn]: "my-hostname-basic-1eb625ba-983e-4ce0-8a28-cce3c50430ba-hdwnn", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:57:49.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4604" for this suite.

• [SLOW TEST:10.154 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":235,"skipped":3758,"failed":0}
S
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:57:49.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3179, will wait for the garbage collector to delete the pods
Jul 20 21:57:53.645: INFO: Deleting Job.batch foo took: 6.185713ms
Jul 20 21:57:53.945: INFO: Terminating Job.batch foo pods took: 300.192468ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:58:37.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3179" for this suite.

• [SLOW TEST:48.163 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":236,"skipped":3759,"failed":0}
SS
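
A sketch of a Job like the one deleted above ("Job.batch foo"): the pod template and the `parallelism: 2` value are assumptions; only the name comes from the log. Deleting the Job then lets the garbage collector remove its pods, which is the ~44-second wait visible between the delete and the namespace teardown.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2          # "Ensuring active pods == parallelism" checks this many pods run
  completions: 4
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "sleep 5"]
```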
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:58:37.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-add86871-2e31-4218-9ae0-e9c77902c30d
STEP: Creating a pod to test consume configMaps
Jul 20 21:58:37.764: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d6615c0-ac2f-44f7-838b-c955a728cf9b" in namespace "configmap-4010" to be "success or failure"
Jul 20 21:58:37.768: INFO: Pod "pod-configmaps-2d6615c0-ac2f-44f7-838b-c955a728cf9b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.746711ms
Jul 20 21:58:39.780: INFO: Pod "pod-configmaps-2d6615c0-ac2f-44f7-838b-c955a728cf9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015799147s
Jul 20 21:58:41.784: INFO: Pod "pod-configmaps-2d6615c0-ac2f-44f7-838b-c955a728cf9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019520467s
Jul 20 21:58:43.788: INFO: Pod "pod-configmaps-2d6615c0-ac2f-44f7-838b-c955a728cf9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023308062s
STEP: Saw pod success
Jul 20 21:58:43.788: INFO: Pod "pod-configmaps-2d6615c0-ac2f-44f7-838b-c955a728cf9b" satisfied condition "success or failure"
Jul 20 21:58:43.790: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-2d6615c0-ac2f-44f7-838b-c955a728cf9b container configmap-volume-test: 
STEP: delete the pod
Jul 20 21:58:43.841: INFO: Waiting for pod pod-configmaps-2d6615c0-ac2f-44f7-838b-c955a728cf9b to disappear
Jul 20 21:58:43.884: INFO: Pod pod-configmaps-2d6615c0-ac2f-44f7-838b-c955a728cf9b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:58:43.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4010" for this suite.

• [SLOW TEST:6.208 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3761,"failed":0}
SSSSSSSSSSSSSSS
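
A minimal sketch of the pattern under test: a ConfigMap volume with an item mapping (key renamed to a different file path), read by a non-root container. Names, the UID, and the key are assumptions, not copied from the suite's source.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000            # non-root, as the [LinuxOnly] variant requires
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                   # the "mappings": remap a key to a chosen path
      - key: data-1
        path: path/to/data
```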
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:58:43.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 20 21:58:43.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3110'
Jul 20 21:58:44.070: INFO: stderr: ""
Jul 20 21:58:44.070: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759
Jul 20 21:58:44.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3110'
Jul 20 21:58:57.383: INFO: stderr: ""
Jul 20 21:58:57.383: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:58:57.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3110" for this suite.

• [SLOW TEST:13.504 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":238,"skipped":3776,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
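
The `--generator=run-pod/v1` form in the command above is the since-removed imperative path; a declarative equivalent of what it creates (pod name and image from the log, container name assumed to mirror the pod name as `kubectl run` does) would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
spec:
  restartPolicy: Never         # what --restart=Never sets
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/httpd:2.4.38-alpine
```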
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:58:57.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Jul 20 21:58:57.469: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:59:05.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5060" for this suite.

• [SLOW TEST:7.667 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":239,"skipped":3892,"failed":0}
SSSSSSS
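
"initContainers in spec.initContainers" refers to a pod shaped roughly like the sketch below (names, images, and commands are assumptions). Init containers run to completion, one at a time and in order, before any app container starts; with `restartPolicy: Always` the pod then keeps its app container running.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-container-example
spec:
  restartPolicy: Always
  initContainers:              # each must exit 0 before the next starts
  - name: init-1
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "true"]
  - name: init-2
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "true"]
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
```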
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:59:05.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-5e80390f-1773-4ca2-a304-2cc5fa52fc97
STEP: Creating a pod to test consume configMaps
Jul 20 21:59:05.136: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7c11ba6d-f3e5-4d15-8c09-ebaab9fba49e" in namespace "projected-5887" to be "success or failure"
Jul 20 21:59:05.140: INFO: Pod "pod-projected-configmaps-7c11ba6d-f3e5-4d15-8c09-ebaab9fba49e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.960245ms
Jul 20 21:59:07.144: INFO: Pod "pod-projected-configmaps-7c11ba6d-f3e5-4d15-8c09-ebaab9fba49e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007974933s
Jul 20 21:59:09.148: INFO: Pod "pod-projected-configmaps-7c11ba6d-f3e5-4d15-8c09-ebaab9fba49e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012056603s
STEP: Saw pod success
Jul 20 21:59:09.148: INFO: Pod "pod-projected-configmaps-7c11ba6d-f3e5-4d15-8c09-ebaab9fba49e" satisfied condition "success or failure"
Jul 20 21:59:09.151: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-7c11ba6d-f3e5-4d15-8c09-ebaab9fba49e container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 21:59:09.178: INFO: Waiting for pod pod-projected-configmaps-7c11ba6d-f3e5-4d15-8c09-ebaab9fba49e to disappear
Jul 20 21:59:09.206: INFO: Pod pod-projected-configmaps-7c11ba6d-f3e5-4d15-8c09-ebaab9fba49e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:59:09.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5887" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3899,"failed":0}
SS
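
A sketch of a projected ConfigMap volume with both a key-to-path mapping and a per-item file mode, which is what "mappings and Item mode set" covers; names and the key are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected/renamed-data"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: renamed-data
            mode: 0400         # the per-item "Item mode"
```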
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:59:09.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Jul 20 21:59:09.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul 20 21:59:09.413: INFO: stderr: ""
Jul 20 21:59:09.413: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45705\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45705/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:59:09.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2877" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":241,"skipped":3901,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:59:09.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-dd5f99ea-f39e-471a-b24c-095656a1674e
STEP: Creating a pod to test consume configMaps
Jul 20 21:59:09.528: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6d656f42-eb7a-40cd-92f1-b7db6602f7e9" in namespace "projected-2554" to be "success or failure"
Jul 20 21:59:09.536: INFO: Pod "pod-projected-configmaps-6d656f42-eb7a-40cd-92f1-b7db6602f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.495091ms
Jul 20 21:59:11.539: INFO: Pod "pod-projected-configmaps-6d656f42-eb7a-40cd-92f1-b7db6602f7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010961599s
Jul 20 21:59:13.543: INFO: Pod "pod-projected-configmaps-6d656f42-eb7a-40cd-92f1-b7db6602f7e9": Phase="Running", Reason="", readiness=true. Elapsed: 4.015214152s
Jul 20 21:59:15.547: INFO: Pod "pod-projected-configmaps-6d656f42-eb7a-40cd-92f1-b7db6602f7e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019177119s
STEP: Saw pod success
Jul 20 21:59:15.547: INFO: Pod "pod-projected-configmaps-6d656f42-eb7a-40cd-92f1-b7db6602f7e9" satisfied condition "success or failure"
Jul 20 21:59:15.551: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6d656f42-eb7a-40cd-92f1-b7db6602f7e9 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 20 21:59:15.574: INFO: Waiting for pod pod-projected-configmaps-6d656f42-eb7a-40cd-92f1-b7db6602f7e9 to disappear
Jul 20 21:59:15.597: INFO: Pod pod-projected-configmaps-6d656f42-eb7a-40cd-92f1-b7db6602f7e9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:59:15.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2554" for this suite.

• [SLOW TEST:6.186 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3904,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:59:15.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 20 21:59:15.714: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b61d294-e00b-4b21-9e9a-ad43e64bbcd7" in namespace "projected-6414" to be "success or failure"
Jul 20 21:59:15.729: INFO: Pod "downwardapi-volume-6b61d294-e00b-4b21-9e9a-ad43e64bbcd7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.368271ms
Jul 20 21:59:17.733: INFO: Pod "downwardapi-volume-6b61d294-e00b-4b21-9e9a-ad43e64bbcd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019018364s
Jul 20 21:59:19.737: INFO: Pod "downwardapi-volume-6b61d294-e00b-4b21-9e9a-ad43e64bbcd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023174802s
STEP: Saw pod success
Jul 20 21:59:19.737: INFO: Pod "downwardapi-volume-6b61d294-e00b-4b21-9e9a-ad43e64bbcd7" satisfied condition "success or failure"
Jul 20 21:59:19.740: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6b61d294-e00b-4b21-9e9a-ad43e64bbcd7 container client-container: 
STEP: delete the pod
Jul 20 21:59:19.792: INFO: Waiting for pod downwardapi-volume-6b61d294-e00b-4b21-9e9a-ad43e64bbcd7 to disappear
Jul 20 21:59:19.845: INFO: Pod downwardapi-volume-6b61d294-e00b-4b21-9e9a-ad43e64bbcd7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:59:19.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6414" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3906,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
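
The downward API volume plugin exposes a container's CPU limit as a file, which is what this spec verifies. A minimal sketch (names and the 500m limit are assumptions): with `divisor: 1m`, a limit of 500m is written to the file as `500`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m      # 500m / 1m -> file contains "500"
```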
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:59:19.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5482
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5482
STEP: Creating statefulset with conflicting port in namespace statefulset-5482
STEP: Waiting until pod test-pod starts running in namespace statefulset-5482
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-5482
Jul 20 21:59:24.072: INFO: Observed stateful pod in namespace: statefulset-5482, name: ss-0, uid: cec4ffd3-2dac-4eed-8794-85bcfa85e2c0, status phase: Pending. Waiting for statefulset controller to delete.
Jul 20 21:59:24.239: INFO: Observed stateful pod in namespace: statefulset-5482, name: ss-0, uid: cec4ffd3-2dac-4eed-8794-85bcfa85e2c0, status phase: Failed. Waiting for statefulset controller to delete.
Jul 20 21:59:24.255: INFO: Observed stateful pod in namespace: statefulset-5482, name: ss-0, uid: cec4ffd3-2dac-4eed-8794-85bcfa85e2c0, status phase: Failed. Waiting for statefulset controller to delete.
Jul 20 21:59:24.279: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5482
STEP: Removing pod with conflicting port in namespace statefulset-5482
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5482 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 20 21:59:28.378: INFO: Deleting all statefulset in ns statefulset-5482
Jul 20 21:59:28.381: INFO: Scaling statefulset ss to 0
Jul 20 21:59:38.404: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 21:59:38.406: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:59:38.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5482" for this suite.

• [SLOW TEST:18.574 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":244,"skipped":3945,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:59:38.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-d6ffad9a-2347-45c6-bff9-cdfb810d6791
Jul 20 21:59:38.483: INFO: Pod name my-hostname-basic-d6ffad9a-2347-45c6-bff9-cdfb810d6791: Found 0 pods out of 1
Jul 20 21:59:43.492: INFO: Pod name my-hostname-basic-d6ffad9a-2347-45c6-bff9-cdfb810d6791: Found 1 pods out of 1
Jul 20 21:59:43.492: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d6ffad9a-2347-45c6-bff9-cdfb810d6791" are running
Jul 20 21:59:43.496: INFO: Pod "my-hostname-basic-d6ffad9a-2347-45c6-bff9-cdfb810d6791-vkc8x" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 21:59:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 21:59:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 21:59:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-20 21:59:38 +0000 UTC Reason: Message:}])
Jul 20 21:59:43.496: INFO: Trying to dial the pod
Jul 20 21:59:48.507: INFO: Controller my-hostname-basic-d6ffad9a-2347-45c6-bff9-cdfb810d6791: Got expected result from replica 1 [my-hostname-basic-d6ffad9a-2347-45c6-bff9-cdfb810d6791-vkc8x]: "my-hostname-basic-d6ffad9a-2347-45c6-bff9-cdfb810d6791-vkc8x", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 21:59:48.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9679" for this suite.

• [SLOW TEST:10.089 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":245,"skipped":3955,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 21:59:48.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4074
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Jul 20 21:59:48.638: INFO: Found 0 stateful pods, waiting for 3
Jul 20 21:59:58.643: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 21:59:58.643: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 21:59:58.643: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jul 20 22:00:08.643: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 22:00:08.643: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 22:00:08.643: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul 20 22:00:08.670: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 20 22:00:18.729: INFO: Updating stateful set ss2
Jul 20 22:00:18.763: INFO: Waiting for Pod statefulset-4074/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jul 20 22:00:29.361: INFO: Found 2 stateful pods, waiting for 3
Jul 20 22:00:39.365: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 22:00:39.365: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 20 22:00:39.365: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 20 22:00:39.387: INFO: Updating stateful set ss2
Jul 20 22:00:39.425: INFO: Waiting for Pod statefulset-4074/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 20 22:00:49.449: INFO: Updating stateful set ss2
Jul 20 22:00:49.497: INFO: Waiting for StatefulSet statefulset-4074/ss2 to complete update
Jul 20 22:00:49.497: INFO: Waiting for Pod statefulset-4074/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 20 22:00:59.503: INFO: Waiting for StatefulSet statefulset-4074/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jul 20 22:01:09.505: INFO: Deleting all statefulset in ns statefulset-4074
Jul 20 22:01:09.509: INFO: Scaling statefulset ss2 to 0
Jul 20 22:01:29.526: INFO: Waiting for statefulset status.replicas updated to 0
Jul 20 22:01:29.529: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:01:29.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4074" for this suite.

• [SLOW TEST:101.037 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":246,"skipped":3981,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
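
The canary and phased behavior above is driven by the RollingUpdate partition. A sketch of the StatefulSet with the partition set (set name, service name, replica count, and images come from the log; labels and the container name are assumptions): only pods with an ordinal >= the partition get the new template, so `partition: 2` updates just ss2-2, the canary.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2         # only ordinals >= 2 roll to the new revision
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.39-alpine   # updated from 2.4.38-alpine
```

Lowering the partition step by step (2, then 1, then 0) is the phased rolling update the spec performs, which matches the per-pod revision waits in the log (ss2-2, then ss2-1, then ss2-0).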
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:01:29.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Jul 20 22:01:29.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4282'
Jul 20 22:01:29.972: INFO: stderr: ""
Jul 20 22:01:29.972: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul 20 22:01:30.976: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 22:01:30.976: INFO: Found 0 / 1
Jul 20 22:01:31.989: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 22:01:31.989: INFO: Found 0 / 1
Jul 20 22:01:32.976: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 22:01:32.976: INFO: Found 0 / 1
Jul 20 22:01:33.983: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 22:01:33.983: INFO: Found 1 / 1
Jul 20 22:01:33.983: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul 20 22:01:34.002: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 22:01:34.002: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 20 22:01:34.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-wn9hk --namespace=kubectl-4282 -p {"metadata":{"annotations":{"x":"y"}}}'
Jul 20 22:01:34.130: INFO: stderr: ""
Jul 20 22:01:34.130: INFO: stdout: "pod/agnhost-master-wn9hk patched\n"
STEP: checking annotations
Jul 20 22:01:34.138: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 20 22:01:34.138: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:01:34.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4282" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":247,"skipped":4062,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:01:34.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 22:01:34.962: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 22:01:36.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879294, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879294, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879295, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879294, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 22:01:38.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879294, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879294, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879295, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879294, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 22:01:42.086: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jul 20 22:01:42.109: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:01:42.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6109" for this suite.
STEP: Destroying namespace "webhook-6109-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.120 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":248,"skipped":4090,"failed":0}
SSSSSSSS
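
The registration step above installs a validating webhook that intercepts CRD creation. A rough sketch under stated assumptions: the service name and namespace appear in the log, but the webhook name, path, and rule details here are illustrative, not the suite's actual configuration.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-example
webhooks:
- name: deny-crd.example.com      # name assumed
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: webhook-6109     # from the log
      name: e2e-test-webhook      # from the log
      path: /crd                  # path assumed
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1"]
```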
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:01:42.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Jul 20 22:01:42.356: INFO: Waiting up to 5m0s for pod "client-containers-42c54a00-d6da-4770-83ce-7ba0f91dd580" in namespace "containers-9875" to be "success or failure"
Jul 20 22:01:42.360: INFO: Pod "client-containers-42c54a00-d6da-4770-83ce-7ba0f91dd580": Phase="Pending", Reason="", readiness=false. Elapsed: 4.502167ms
Jul 20 22:01:44.397: INFO: Pod "client-containers-42c54a00-d6da-4770-83ce-7ba0f91dd580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041791212s
Jul 20 22:01:46.408: INFO: Pod "client-containers-42c54a00-d6da-4770-83ce-7ba0f91dd580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052375397s
STEP: Saw pod success
Jul 20 22:01:46.408: INFO: Pod "client-containers-42c54a00-d6da-4770-83ce-7ba0f91dd580" satisfied condition "success or failure"
Jul 20 22:01:46.411: INFO: Trying to get logs from node jerma-worker pod client-containers-42c54a00-d6da-4770-83ce-7ba0f91dd580 container test-container: 
STEP: delete the pod
Jul 20 22:01:46.440: INFO: Waiting for pod client-containers-42c54a00-d6da-4770-83ce-7ba0f91dd580 to disappear
Jul 20 22:01:46.444: INFO: Pod client-containers-42c54a00-d6da-4770-83ce-7ba0f91dd580 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:01:46.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9875" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4098,"failed":0}
SSSSSSSSSSSSSSSSS
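
"Override all" means setting both `command` (overrides the image ENTRYPOINT) and `args` (overrides the image CMD). A minimal sketch; the echoed words are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]             # replaces the image's ENTRYPOINT
    args: ["override", "arguments"]    # replaces the image's CMD
```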
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:01:46.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-cde8f884-86c8-4c04-a847-0babcb5982f9
STEP: Creating a pod to test consume secrets
Jul 20 22:01:46.548: INFO: Waiting up to 5m0s for pod "pod-secrets-7d17cde4-3f88-473d-ac37-c2c48d9bb9d1" in namespace "secrets-2367" to be "success or failure"
Jul 20 22:01:46.565: INFO: Pod "pod-secrets-7d17cde4-3f88-473d-ac37-c2c48d9bb9d1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.439956ms
Jul 20 22:01:48.835: INFO: Pod "pod-secrets-7d17cde4-3f88-473d-ac37-c2c48d9bb9d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287294342s
Jul 20 22:01:50.883: INFO: Pod "pod-secrets-7d17cde4-3f88-473d-ac37-c2c48d9bb9d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.335124839s
STEP: Saw pod success
Jul 20 22:01:50.883: INFO: Pod "pod-secrets-7d17cde4-3f88-473d-ac37-c2c48d9bb9d1" satisfied condition "success or failure"
Jul 20 22:01:50.885: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7d17cde4-3f88-473d-ac37-c2c48d9bb9d1 container secret-volume-test: 
STEP: delete the pod
Jul 20 22:01:50.955: INFO: Waiting for pod pod-secrets-7d17cde4-3f88-473d-ac37-c2c48d9bb9d1 to disappear
Jul 20 22:01:50.959: INFO: Pod pod-secrets-7d17cde4-3f88-473d-ac37-c2c48d9bb9d1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:01:50.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2367" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4115,"failed":0}
SSSSSSSSSS
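
A sketch of the secret-volume pattern under test: non-root user, a restrictive defaultMode, and an fsGroup so the files stay readable. Names and the UID/GID values are assumptions. With `fsGroup: 2000`, the secret files are group-owned by GID 2000, so mode 0440 still lets the UID-1000 process (which gets 2000 as a supplemental group) read them.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000            # volume files become group-owned by this GID
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["ls", "-l", "/etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0440      # owner- and group-readable only
```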
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:01:50.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Jul 20 22:01:51.070: INFO: Waiting up to 5m0s for pod "var-expansion-a20e8290-21c5-49b4-bd4d-a37db803103a" in namespace "var-expansion-1149" to be "success or failure"
Jul 20 22:01:51.073: INFO: Pod "var-expansion-a20e8290-21c5-49b4-bd4d-a37db803103a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.023904ms
Jul 20 22:01:53.077: INFO: Pod "var-expansion-a20e8290-21c5-49b4-bd4d-a37db803103a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00696319s
Jul 20 22:01:55.081: INFO: Pod "var-expansion-a20e8290-21c5-49b4-bd4d-a37db803103a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010633098s
STEP: Saw pod success
Jul 20 22:01:55.081: INFO: Pod "var-expansion-a20e8290-21c5-49b4-bd4d-a37db803103a" satisfied condition "success or failure"
Jul 20 22:01:55.083: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-a20e8290-21c5-49b4-bd4d-a37db803103a container dapi-container: 
STEP: delete the pod
Jul 20 22:01:55.104: INFO: Waiting for pod var-expansion-a20e8290-21c5-49b4-bd4d-a37db803103a to disappear
Jul 20 22:01:55.125: INFO: Pod var-expansion-a20e8290-21c5-49b4-bd4d-a37db803103a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:01:55.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1149" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4125,"failed":0}
SSSSSSSSSSSSSSS
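
Substitution in a container's command uses the `$(VAR)` syntax, which the kubelet expands from the container's `env` before the process starts (it is not shell expansion). A minimal sketch; the variable name and value are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    env:
    - name: MESSAGE
      value: "hello from the substituted command"
    command: ["sh", "-c", "echo $(MESSAGE)"]   # $(MESSAGE) is expanded by the kubelet
```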
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:01:55.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Jul 20 22:01:55.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5386 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul 20 22:01:58.942: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0720 22:01:58.847006    4071 log.go:172] (0xc000a98a50) (0xc0004021e0) Create stream\nI0720 22:01:58.847085    4071 log.go:172] (0xc000a98a50) (0xc0004021e0) Stream added, broadcasting: 1\nI0720 22:01:58.849658    4071 log.go:172] (0xc000a98a50) Reply frame received for 1\nI0720 22:01:58.849703    4071 log.go:172] (0xc000a98a50) (0xc000814000) Create stream\nI0720 22:01:58.849717    4071 log.go:172] (0xc000a98a50) (0xc000814000) Stream added, broadcasting: 3\nI0720 22:01:58.850908    4071 log.go:172] (0xc000a98a50) Reply frame received for 3\nI0720 22:01:58.850959    4071 log.go:172] (0xc000a98a50) (0xc0008140a0) Create stream\nI0720 22:01:58.850978    4071 log.go:172] (0xc000a98a50) (0xc0008140a0) Stream added, broadcasting: 5\nI0720 22:01:58.852239    4071 log.go:172] (0xc000a98a50) Reply frame received for 5\nI0720 22:01:58.852307    4071 log.go:172] (0xc000a98a50) (0xc000814140) Create stream\nI0720 22:01:58.852333    4071 log.go:172] (0xc000a98a50) (0xc000814140) Stream added, broadcasting: 7\nI0720 22:01:58.853526    4071 log.go:172] (0xc000a98a50) Reply frame received for 7\nI0720 22:01:58.853710    4071 log.go:172] (0xc000814000) (3) Writing data frame\nI0720 22:01:58.853851    4071 log.go:172] (0xc000814000) (3) Writing data frame\nI0720 22:01:58.854828    4071 log.go:172] (0xc000a98a50) Data frame received for 5\nI0720 22:01:58.854839    4071 log.go:172] (0xc0008140a0) (5) Data frame handling\nI0720 22:01:58.854845    4071 log.go:172] (0xc0008140a0) (5) Data frame sent\nI0720 22:01:58.855501    4071 log.go:172] (0xc000a98a50) Data frame received for 5\nI0720 22:01:58.855512    4071 log.go:172] (0xc0008140a0) (5) Data frame handling\nI0720 22:01:58.855519    4071 log.go:172] (0xc0008140a0) (5) Data frame sent\nI0720 22:01:58.899430    4071 log.go:172] (0xc000a98a50) Data frame received for 5\nI0720 22:01:58.899466    4071 log.go:172] (0xc0008140a0) (5) Data frame handling\nI0720 22:01:58.899522    4071 log.go:172] (0xc000a98a50) Data frame received for 7\nI0720 22:01:58.899568    4071 log.go:172] (0xc000814140) (7) Data frame handling\nI0720 22:01:58.900062    4071 log.go:172] (0xc000a98a50) Data frame received for 1\nI0720 22:01:58.900095    4071 log.go:172] (0xc0004021e0) (1) Data frame handling\nI0720 22:01:58.900120    4071 log.go:172] (0xc0004021e0) (1) Data frame sent\nI0720 22:01:58.900146    4071 log.go:172] (0xc000a98a50) (0xc0004021e0) Stream removed, broadcasting: 1\nI0720 22:01:58.900415    4071 log.go:172] (0xc000a98a50) (0xc000814000) Stream removed, broadcasting: 3\nI0720 22:01:58.900456    4071 log.go:172] (0xc000a98a50) Go away received\nI0720 22:01:58.900617    4071 log.go:172] (0xc000a98a50) (0xc0004021e0) Stream removed, broadcasting: 1\nI0720 22:01:58.900648    4071 log.go:172] (0xc000a98a50) (0xc000814000) Stream removed, broadcasting: 3\nI0720 22:01:58.900667    4071 log.go:172] (0xc000a98a50) (0xc0008140a0) Stream removed, broadcasting: 5\nI0720 22:01:58.900684    4071 log.go:172] (0xc000a98a50) (0xc000814140) Stream removed, broadcasting: 7\n"
Jul 20 22:01:58.942: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:02:00.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5386" for this suite.

• [SLOW TEST:5.822 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":252,"skipped":4140,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:02:00.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 22:02:01.032: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jul 20 22:02:03.067: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:02:04.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5323" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":253,"skipped":4190,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
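
A sketch of the quota-violation setup: a ResourceQuota capping the namespace at two pods, and an rc asking for more (the quota and rc names come from the log; `replicas: 3`, labels, and the image are assumptions). Until the rc is scaled down to fit, its controller surfaces a `ReplicaFailure` condition in `status.conditions`.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                # only two pods may run in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                # one more than the quota allows
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: c
        image: docker.io/library/httpd:2.4.38-alpine
```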
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:02:04.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 20 22:02:08.950: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:02:09.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7808" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4211,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
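
The "DONE" message above comes from a container that writes its termination message to a custom path while running as non-root. A sketch under assumptions (UID, path, and container name are illustrative; the kubelet creates the termination-log file world-writable, so a non-root process can write it):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root
  containers:
  - name: term
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
    terminationMessagePolicy: File
```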
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:02:09.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0720 22:02:39.483099       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 20 22:02:39.483: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:02:39.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8142" for this suite.

• [SLOW TEST:30.451 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":255,"skipped":4232,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:02:39.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 22:02:39.926: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 22:02:41.935: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879359, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879359, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879360, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879359, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 22:02:43.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879359, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879359, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879360, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879359, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 22:02:46.977: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:02:47.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6312" for this suite.
STEP: Destroying namespace "webhook-6312-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.622 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":256,"skipped":4234,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:02:47.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 20 22:02:51.240: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:02:51.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8373" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4275,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:02:51.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 20 22:02:51.561: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8da5bc3-9b11-419f-baa6-2fda5edd0663" in namespace "projected-5584" to be "success or failure"
Jul 20 22:02:51.579: INFO: Pod "downwardapi-volume-e8da5bc3-9b11-419f-baa6-2fda5edd0663": Phase="Pending", Reason="", readiness=false. Elapsed: 17.516783ms
Jul 20 22:02:53.732: INFO: Pod "downwardapi-volume-e8da5bc3-9b11-419f-baa6-2fda5edd0663": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170710771s
Jul 20 22:02:55.735: INFO: Pod "downwardapi-volume-e8da5bc3-9b11-419f-baa6-2fda5edd0663": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174223386s
STEP: Saw pod success
Jul 20 22:02:55.735: INFO: Pod "downwardapi-volume-e8da5bc3-9b11-419f-baa6-2fda5edd0663" satisfied condition "success or failure"
Jul 20 22:02:55.738: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-e8da5bc3-9b11-419f-baa6-2fda5edd0663 container client-container: 
STEP: delete the pod
Jul 20 22:02:55.759: INFO: Waiting for pod downwardapi-volume-e8da5bc3-9b11-419f-baa6-2fda5edd0663 to disappear
Jul 20 22:02:55.764: INFO: Pod downwardapi-volume-e8da5bc3-9b11-419f-baa6-2fda5edd0663 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:02:55.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5584" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4281,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:02:55.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:03:07.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7183" for this suite.

• [SLOW TEST:11.391 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":259,"skipped":4282,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:03:07.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-5629
STEP: creating replication controller nodeport-test in namespace services-5629
I0720 22:03:07.477382       6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-5629, replica count: 2
I0720 22:03:10.527832       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 22:03:13.528061       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 20 22:03:13.528: INFO: Creating new exec pod
Jul 20 22:03:18.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5629 execpodwvz6m -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jul 20 22:03:18.807: INFO: stderr: "I0720 22:03:18.696524    4094 log.go:172] (0xc0000f5550) (0xc00077e000) Create stream\nI0720 22:03:18.696591    4094 log.go:172] (0xc0000f5550) (0xc00077e000) Stream added, broadcasting: 1\nI0720 22:03:18.701598    4094 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0720 22:03:18.701737    4094 log.go:172] (0xc0000f5550) (0xc0008c8000) Create stream\nI0720 22:03:18.701828    4094 log.go:172] (0xc0000f5550) (0xc0008c8000) Stream added, broadcasting: 3\nI0720 22:03:18.705866    4094 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0720 22:03:18.705908    4094 log.go:172] (0xc0000f5550) (0xc000667ae0) Create stream\nI0720 22:03:18.705924    4094 log.go:172] (0xc0000f5550) (0xc000667ae0) Stream added, broadcasting: 5\nI0720 22:03:18.706839    4094 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0720 22:03:18.800383    4094 log.go:172] (0xc0000f5550) Data frame received for 5\nI0720 22:03:18.800411    4094 log.go:172] (0xc000667ae0) (5) Data frame handling\nI0720 22:03:18.800429    4094 log.go:172] (0xc000667ae0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0720 22:03:18.800563    4094 log.go:172] (0xc0000f5550) Data frame received for 5\nI0720 22:03:18.800582    4094 log.go:172] (0xc000667ae0) (5) Data frame handling\nI0720 22:03:18.800592    4094 log.go:172] (0xc000667ae0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0720 22:03:18.801048    4094 log.go:172] (0xc0000f5550) Data frame received for 3\nI0720 22:03:18.801073    4094 log.go:172] (0xc0008c8000) (3) Data frame handling\nI0720 22:03:18.801115    4094 log.go:172] (0xc0000f5550) Data frame received for 5\nI0720 22:03:18.801148    4094 log.go:172] (0xc000667ae0) (5) Data frame handling\nI0720 22:03:18.802901    4094 log.go:172] (0xc0000f5550) Data frame received for 1\nI0720 22:03:18.802918    4094 log.go:172] (0xc00077e000) (1) Data frame handling\nI0720 22:03:18.802927    4094 log.go:172] (0xc00077e000) (1) Data frame sent\nI0720 22:03:18.802937    4094 log.go:172] (0xc0000f5550) (0xc00077e000) Stream removed, broadcasting: 1\nI0720 22:03:18.803131    4094 log.go:172] (0xc0000f5550) Go away received\nI0720 22:03:18.803196    4094 log.go:172] (0xc0000f5550) (0xc00077e000) Stream removed, broadcasting: 1\nI0720 22:03:18.803209    4094 log.go:172] (0xc0000f5550) (0xc0008c8000) Stream removed, broadcasting: 3\nI0720 22:03:18.803214    4094 log.go:172] (0xc0000f5550) (0xc000667ae0) Stream removed, broadcasting: 5\n"
Jul 20 22:03:18.807: INFO: stdout: ""
Jul 20 22:03:18.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5629 execpodwvz6m -- /bin/sh -x -c nc -zv -t -w 2 10.109.180.18 80'
Jul 20 22:03:19.010: INFO: stderr: "I0720 22:03:18.945188    4114 log.go:172] (0xc0009e2000) (0xc0004414a0) Create stream\nI0720 22:03:18.945266    4114 log.go:172] (0xc0009e2000) (0xc0004414a0) Stream added, broadcasting: 1\nI0720 22:03:18.947727    4114 log.go:172] (0xc0009e2000) Reply frame received for 1\nI0720 22:03:18.947766    4114 log.go:172] (0xc0009e2000) (0xc000a12000) Create stream\nI0720 22:03:18.947778    4114 log.go:172] (0xc0009e2000) (0xc000a12000) Stream added, broadcasting: 3\nI0720 22:03:18.948487    4114 log.go:172] (0xc0009e2000) Reply frame received for 3\nI0720 22:03:18.948510    4114 log.go:172] (0xc0009e2000) (0xc000a46000) Create stream\nI0720 22:03:18.948518    4114 log.go:172] (0xc0009e2000) (0xc000a46000) Stream added, broadcasting: 5\nI0720 22:03:18.949273    4114 log.go:172] (0xc0009e2000) Reply frame received for 5\nI0720 22:03:19.003862    4114 log.go:172] (0xc0009e2000) Data frame received for 5\nI0720 22:03:19.003907    4114 log.go:172] (0xc000a46000) (5) Data frame handling\nI0720 22:03:19.003925    4114 log.go:172] (0xc000a46000) (5) Data frame sent\nI0720 22:03:19.003936    4114 log.go:172] (0xc0009e2000) Data frame received for 5\nI0720 22:03:19.003947    4114 log.go:172] (0xc000a46000) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.180.18 80\nConnection to 10.109.180.18 80 port [tcp/http] succeeded!\nI0720 22:03:19.003976    4114 log.go:172] (0xc0009e2000) Data frame received for 3\nI0720 22:03:19.003990    4114 log.go:172] (0xc000a12000) (3) Data frame handling\nI0720 22:03:19.005105    4114 log.go:172] (0xc0009e2000) Data frame received for 1\nI0720 22:03:19.005129    4114 log.go:172] (0xc0004414a0) (1) Data frame handling\nI0720 22:03:19.005142    4114 log.go:172] (0xc0004414a0) (1) Data frame sent\nI0720 22:03:19.005161    4114 log.go:172] (0xc0009e2000) (0xc0004414a0) Stream removed, broadcasting: 1\nI0720 22:03:19.005183    4114 log.go:172] (0xc0009e2000) Go away received\nI0720 22:03:19.005571    4114 log.go:172] (0xc0009e2000) (0xc0004414a0) Stream removed, broadcasting: 1\nI0720 22:03:19.005594    4114 log.go:172] (0xc0009e2000) (0xc000a12000) Stream removed, broadcasting: 3\nI0720 22:03:19.005608    4114 log.go:172] (0xc0009e2000) (0xc000a46000) Stream removed, broadcasting: 5\n"
Jul 20 22:03:19.010: INFO: stdout: ""
Jul 20 22:03:19.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5629 execpodwvz6m -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31404'
Jul 20 22:03:19.194: INFO: stderr: "I0720 22:03:19.133880    4133 log.go:172] (0xc0009e80b0) (0xc000657f40) Create stream\nI0720 22:03:19.133926    4133 log.go:172] (0xc0009e80b0) (0xc000657f40) Stream added, broadcasting: 1\nI0720 22:03:19.135903    4133 log.go:172] (0xc0009e80b0) Reply frame received for 1\nI0720 22:03:19.135932    4133 log.go:172] (0xc0009e80b0) (0xc0005da8c0) Create stream\nI0720 22:03:19.135940    4133 log.go:172] (0xc0009e80b0) (0xc0005da8c0) Stream added, broadcasting: 3\nI0720 22:03:19.136717    4133 log.go:172] (0xc0009e80b0) Reply frame received for 3\nI0720 22:03:19.136858    4133 log.go:172] (0xc0009e80b0) (0xc000a2a000) Create stream\nI0720 22:03:19.136877    4133 log.go:172] (0xc0009e80b0) (0xc000a2a000) Stream added, broadcasting: 5\nI0720 22:03:19.137807    4133 log.go:172] (0xc0009e80b0) Reply frame received for 5\nI0720 22:03:19.187849    4133 log.go:172] (0xc0009e80b0) Data frame received for 3\nI0720 22:03:19.187890    4133 log.go:172] (0xc0005da8c0) (3) Data frame handling\nI0720 22:03:19.187912    4133 log.go:172] (0xc0009e80b0) Data frame received for 5\nI0720 22:03:19.187920    4133 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0720 22:03:19.187929    4133 log.go:172] (0xc000a2a000) (5) Data frame sent\nI0720 22:03:19.187937    4133 log.go:172] (0xc0009e80b0) Data frame received for 5\nI0720 22:03:19.187944    4133 log.go:172] (0xc000a2a000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.6 31404\nConnection to 172.18.0.6 31404 port [tcp/31404] succeeded!\nI0720 22:03:19.189393    4133 log.go:172] (0xc0009e80b0) Data frame received for 1\nI0720 22:03:19.189422    4133 log.go:172] (0xc000657f40) (1) Data frame handling\nI0720 22:03:19.189434    4133 log.go:172] (0xc000657f40) (1) Data frame sent\nI0720 22:03:19.189459    4133 log.go:172] (0xc0009e80b0) (0xc000657f40) Stream removed, broadcasting: 1\nI0720 22:03:19.189474    4133 log.go:172] (0xc0009e80b0) Go away received\nI0720 22:03:19.189781    4133 log.go:172] (0xc0009e80b0) (0xc000657f40) Stream removed, broadcasting: 1\nI0720 22:03:19.189798    4133 log.go:172] (0xc0009e80b0) (0xc0005da8c0) Stream removed, broadcasting: 3\nI0720 22:03:19.189806    4133 log.go:172] (0xc0009e80b0) (0xc000a2a000) Stream removed, broadcasting: 5\n"
Jul 20 22:03:19.194: INFO: stdout: ""
Jul 20 22:03:19.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5629 execpodwvz6m -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 31404'
Jul 20 22:03:19.414: INFO: stderr: "I0720 22:03:19.328964    4156 log.go:172] (0xc0006d4630) (0xc000940000) Create stream\nI0720 22:03:19.329034    4156 log.go:172] (0xc0006d4630) (0xc000940000) Stream added, broadcasting: 1\nI0720 22:03:19.331819    4156 log.go:172] (0xc0006d4630) Reply frame received for 1\nI0720 22:03:19.331879    4156 log.go:172] (0xc0006d4630) (0xc000627ae0) Create stream\nI0720 22:03:19.331894    4156 log.go:172] (0xc0006d4630) (0xc000627ae0) Stream added, broadcasting: 3\nI0720 22:03:19.333099    4156 log.go:172] (0xc0006d4630) Reply frame received for 3\nI0720 22:03:19.333141    4156 log.go:172] (0xc0006d4630) (0xc0009400a0) Create stream\nI0720 22:03:19.333158    4156 log.go:172] (0xc0006d4630) (0xc0009400a0) Stream added, broadcasting: 5\nI0720 22:03:19.334196    4156 log.go:172] (0xc0006d4630) Reply frame received for 5\nI0720 22:03:19.407637    4156 log.go:172] (0xc0006d4630) Data frame received for 3\nI0720 22:03:19.407698    4156 log.go:172] (0xc000627ae0) (3) Data frame handling\nI0720 22:03:19.407728    4156 log.go:172] (0xc0006d4630) Data frame received for 5\nI0720 22:03:19.407753    4156 log.go:172] (0xc0009400a0) (5) Data frame handling\nI0720 22:03:19.407785    4156 log.go:172] (0xc0009400a0) (5) Data frame sent\nI0720 22:03:19.407797    4156 log.go:172] (0xc0006d4630) Data frame received for 5\nI0720 22:03:19.407805    4156 log.go:172] (0xc0009400a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.10 31404\nConnection to 172.18.0.10 31404 port [tcp/31404] succeeded!\nI0720 22:03:19.409202    4156 log.go:172] (0xc0006d4630) Data frame received for 1\nI0720 22:03:19.409228    4156 log.go:172] (0xc000940000) (1) Data frame handling\nI0720 22:03:19.409246    4156 log.go:172] (0xc000940000) (1) Data frame sent\nI0720 22:03:19.409269    4156 log.go:172] (0xc0006d4630) (0xc000940000) Stream removed, broadcasting: 1\nI0720 22:03:19.409286    4156 log.go:172] (0xc0006d4630) Go away received\nI0720 22:03:19.409739    4156 log.go:172] (0xc0006d4630) (0xc000940000) Stream removed, broadcasting: 1\nI0720 22:03:19.409761    4156 log.go:172] (0xc0006d4630) (0xc000627ae0) Stream removed, broadcasting: 3\nI0720 22:03:19.409774    4156 log.go:172] (0xc0006d4630) (0xc0009400a0) Stream removed, broadcasting: 5\n"
Jul 20 22:03:19.414: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:03:19.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5629" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.259 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":260,"skipped":4288,"failed":0}
SSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:03:19.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Jul 20 22:03:19.550: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5747" to be "success or failure"
Jul 20 22:03:19.555: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.698643ms
Jul 20 22:03:21.561: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011650687s
Jul 20 22:03:23.633: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083327928s
Jul 20 22:03:25.637: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087082349s
STEP: Saw pod success
Jul 20 22:03:25.637: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul 20 22:03:25.640: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul 20 22:03:25.692: INFO: Waiting for pod pod-host-path-test to disappear
Jul 20 22:03:25.722: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:03:25.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5747" for this suite.

• [SLOW TEST:6.308 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4293,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:03:25.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Jul 20 22:03:25.791: INFO: Waiting up to 5m0s for pod "downward-api-db09326d-500b-40fe-84ae-dc383c8071d2" in namespace "downward-api-8244" to be "success or failure"
Jul 20 22:03:25.795: INFO: Pod "downward-api-db09326d-500b-40fe-84ae-dc383c8071d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.532684ms
Jul 20 22:03:27.873: INFO: Pod "downward-api-db09326d-500b-40fe-84ae-dc383c8071d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081120344s
Jul 20 22:03:30.017: INFO: Pod "downward-api-db09326d-500b-40fe-84ae-dc383c8071d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.225044857s
STEP: Saw pod success
Jul 20 22:03:30.017: INFO: Pod "downward-api-db09326d-500b-40fe-84ae-dc383c8071d2" satisfied condition "success or failure"
Jul 20 22:03:30.052: INFO: Trying to get logs from node jerma-worker2 pod downward-api-db09326d-500b-40fe-84ae-dc383c8071d2 container dapi-container: 
STEP: delete the pod
Jul 20 22:03:30.078: INFO: Waiting for pod downward-api-db09326d-500b-40fe-84ae-dc383c8071d2 to disappear
Jul 20 22:03:30.081: INFO: Pod downward-api-db09326d-500b-40fe-84ae-dc383c8071d2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:03:30.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8244" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4335,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:03:30.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Jul 20 22:03:30.173: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9de32f7c-5803-47cd-9a18-62aafaa37f17" in namespace "projected-7913" to be "success or failure"
Jul 20 22:03:30.195: INFO: Pod "downwardapi-volume-9de32f7c-5803-47cd-9a18-62aafaa37f17": Phase="Pending", Reason="", readiness=false. Elapsed: 22.125376ms
Jul 20 22:03:32.199: INFO: Pod "downwardapi-volume-9de32f7c-5803-47cd-9a18-62aafaa37f17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025882239s
Jul 20 22:03:34.203: INFO: Pod "downwardapi-volume-9de32f7c-5803-47cd-9a18-62aafaa37f17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030252392s
STEP: Saw pod success
Jul 20 22:03:34.203: INFO: Pod "downwardapi-volume-9de32f7c-5803-47cd-9a18-62aafaa37f17" satisfied condition "success or failure"
Jul 20 22:03:34.206: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9de32f7c-5803-47cd-9a18-62aafaa37f17 container client-container: 
STEP: delete the pod
Jul 20 22:03:34.281: INFO: Waiting for pod downwardapi-volume-9de32f7c-5803-47cd-9a18-62aafaa37f17 to disappear
Jul 20 22:03:34.417: INFO: Pod downwardapi-volume-9de32f7c-5803-47cd-9a18-62aafaa37f17 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:03:34.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7913" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4335,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:03:34.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:03:51.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5590" for this suite.

• [SLOW TEST:17.159 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":264,"skipped":4357,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:03:51.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 22:03:52.323: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 22:03:54.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879432, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879432, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879432, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879432, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 22:03:56.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879432, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879432, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879432, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879432, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 22:03:59.375: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 22:03:59.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8723-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:04:00.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8229" for this suite.
STEP: Destroying namespace "webhook-8229-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.116 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":265,"skipped":4362,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:04:00.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-964aafbf-ff62-4230-8b01-8639879b2961
STEP: Creating a pod to test consume secrets
Jul 20 22:04:00.888: INFO: Waiting up to 5m0s for pod "pod-secrets-90a955a7-c5f8-4cf1-981a-815d18ef0a1a" in namespace "secrets-1902" to be "success or failure"
Jul 20 22:04:00.969: INFO: Pod "pod-secrets-90a955a7-c5f8-4cf1-981a-815d18ef0a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 80.727454ms
Jul 20 22:04:02.972: INFO: Pod "pod-secrets-90a955a7-c5f8-4cf1-981a-815d18ef0a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084529657s
Jul 20 22:04:04.977: INFO: Pod "pod-secrets-90a955a7-c5f8-4cf1-981a-815d18ef0a1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088960662s
STEP: Saw pod success
Jul 20 22:04:04.977: INFO: Pod "pod-secrets-90a955a7-c5f8-4cf1-981a-815d18ef0a1a" satisfied condition "success or failure"
Jul 20 22:04:04.980: INFO: Trying to get logs from node jerma-worker pod pod-secrets-90a955a7-c5f8-4cf1-981a-815d18ef0a1a container secret-env-test: 
STEP: delete the pod
Jul 20 22:04:05.006: INFO: Waiting for pod pod-secrets-90a955a7-c5f8-4cf1-981a-815d18ef0a1a to disappear
Jul 20 22:04:05.022: INFO: Pod pod-secrets-90a955a7-c5f8-4cf1-981a-815d18ef0a1a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:04:05.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1902" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4382,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:04:05.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:04:21.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8528" for this suite.

• [SLOW TEST:16.235 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":267,"skipped":4399,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:04:21.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 20 22:04:21.997: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 20 22:04:24.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879462, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879462, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879462, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879461, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 20 22:04:26.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879462, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879462, loc:(*time.Location)(0x78f7140)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879462, loc:(*time.Location)(0x78f7140)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63730879461, loc:(*time.Location)(0x78f7140)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 20 22:04:29.075: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:04:29.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2429" for this suite.
STEP: Destroying namespace "webhook-2429-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.977 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":268,"skipped":4442,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:04:29.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3739
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-3739
I0720 22:04:29.395662       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3739, replica count: 2
I0720 22:04:32.446060       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0720 22:04:35.446418       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 20 22:04:35.446: INFO: Creating new exec pod
Jul 20 22:04:40.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3739 execpodjc7nk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul 20 22:04:43.506: INFO: stderr: "I0720 22:04:43.436232    4179 log.go:172] (0xc00010bad0) (0xc0006f3e00) Create stream\nI0720 22:04:43.436263    4179 log.go:172] (0xc00010bad0) (0xc0006f3e00) Stream added, broadcasting: 1\nI0720 22:04:43.438269    4179 log.go:172] (0xc00010bad0) Reply frame received for 1\nI0720 22:04:43.438305    4179 log.go:172] (0xc00010bad0) (0xc0006025a0) Create stream\nI0720 22:04:43.438317    4179 log.go:172] (0xc00010bad0) (0xc0006025a0) Stream added, broadcasting: 3\nI0720 22:04:43.439089    4179 log.go:172] (0xc00010bad0) Reply frame received for 3\nI0720 22:04:43.439135    4179 log.go:172] (0xc00010bad0) (0xc0003e9360) Create stream\nI0720 22:04:43.439154    4179 log.go:172] (0xc00010bad0) (0xc0003e9360) Stream added, broadcasting: 5\nI0720 22:04:43.439902    4179 log.go:172] (0xc00010bad0) Reply frame received for 5\nI0720 22:04:43.497407    4179 log.go:172] (0xc00010bad0) Data frame received for 5\nI0720 22:04:43.497439    4179 log.go:172] (0xc0003e9360) (5) Data frame handling\nI0720 22:04:43.497459    4179 log.go:172] (0xc0003e9360) (5) Data frame sent\nI0720 22:04:43.497473    4179 log.go:172] (0xc00010bad0) Data frame received for 5\nI0720 22:04:43.497487    4179 log.go:172] (0xc0003e9360) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0720 22:04:43.497517    4179 log.go:172] (0xc0003e9360) (5) Data frame sent\nI0720 22:04:43.497883    4179 log.go:172] (0xc00010bad0) Data frame received for 3\nI0720 22:04:43.497909    4179 log.go:172] (0xc0006025a0) (3) Data frame handling\nI0720 22:04:43.498007    4179 log.go:172] (0xc00010bad0) Data frame received for 5\nI0720 22:04:43.498024    4179 log.go:172] (0xc0003e9360) (5) Data frame handling\nI0720 22:04:43.499650    4179 log.go:172] (0xc00010bad0) Data frame received for 1\nI0720 22:04:43.499675    4179 log.go:172] (0xc0006f3e00) (1) Data frame handling\nI0720 22:04:43.499717    4179 log.go:172] (0xc0006f3e00) (1) Data frame sent\nI0720 22:04:43.499824    4179 log.go:172] (0xc00010bad0) (0xc0006f3e00) Stream removed, broadcasting: 1\nI0720 22:04:43.500101    4179 log.go:172] (0xc00010bad0) Go away received\nI0720 22:04:43.500354    4179 log.go:172] (0xc00010bad0) (0xc0006f3e00) Stream removed, broadcasting: 1\nI0720 22:04:43.500381    4179 log.go:172] (0xc00010bad0) (0xc0006025a0) Stream removed, broadcasting: 3\nI0720 22:04:43.500394    4179 log.go:172] (0xc00010bad0) (0xc0003e9360) Stream removed, broadcasting: 5\n"
Jul 20 22:04:43.507: INFO: stdout: ""
Jul 20 22:04:43.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3739 execpodjc7nk -- /bin/sh -x -c nc -zv -t -w 2 10.106.32.253 80'
Jul 20 22:04:43.723: INFO: stderr: "I0720 22:04:43.642437    4210 log.go:172] (0xc00055cd10) (0xc000a6a000) Create stream\nI0720 22:04:43.642492    4210 log.go:172] (0xc00055cd10) (0xc000a6a000) Stream added, broadcasting: 1\nI0720 22:04:43.645084    4210 log.go:172] (0xc00055cd10) Reply frame received for 1\nI0720 22:04:43.645128    4210 log.go:172] (0xc00055cd10) (0xc000715a40) Create stream\nI0720 22:04:43.645142    4210 log.go:172] (0xc00055cd10) (0xc000715a40) Stream added, broadcasting: 3\nI0720 22:04:43.646155    4210 log.go:172] (0xc00055cd10) Reply frame received for 3\nI0720 22:04:43.646212    4210 log.go:172] (0xc00055cd10) (0xc000a6a0a0) Create stream\nI0720 22:04:43.646285    4210 log.go:172] (0xc00055cd10) (0xc000a6a0a0) Stream added, broadcasting: 5\nI0720 22:04:43.647178    4210 log.go:172] (0xc00055cd10) Reply frame received for 5\nI0720 22:04:43.715912    4210 log.go:172] (0xc00055cd10) Data frame received for 5\nI0720 22:04:43.715946    4210 log.go:172] (0xc000a6a0a0) (5) Data frame handling\nI0720 22:04:43.715972    4210 log.go:172] (0xc000a6a0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.32.253 80\nConnection to 10.106.32.253 80 port [tcp/http] succeeded!\nI0720 22:04:43.716156    4210 log.go:172] (0xc00055cd10) Data frame received for 3\nI0720 22:04:43.716225    4210 log.go:172] (0xc000715a40) (3) Data frame handling\nI0720 22:04:43.716265    4210 log.go:172] (0xc00055cd10) Data frame received for 5\nI0720 22:04:43.716290    4210 log.go:172] (0xc000a6a0a0) (5) Data frame handling\nI0720 22:04:43.717649    4210 log.go:172] (0xc00055cd10) Data frame received for 1\nI0720 22:04:43.717772    4210 log.go:172] (0xc000a6a000) (1) Data frame handling\nI0720 22:04:43.717823    4210 log.go:172] (0xc000a6a000) (1) Data frame sent\nI0720 22:04:43.717852    4210 log.go:172] (0xc00055cd10) (0xc000a6a000) Stream removed, broadcasting: 1\nI0720 22:04:43.717881    4210 log.go:172] (0xc00055cd10) Go away received\nI0720 22:04:43.718353    4210 log.go:172] (0xc00055cd10) (0xc000a6a000) Stream removed, broadcasting: 1\nI0720 22:04:43.718392    4210 log.go:172] (0xc00055cd10) (0xc000715a40) Stream removed, broadcasting: 3\nI0720 22:04:43.718421    4210 log.go:172] (0xc00055cd10) (0xc000a6a0a0) Stream removed, broadcasting: 5\n"
Jul 20 22:04:43.724: INFO: stdout: ""
Jul 20 22:04:43.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3739 execpodjc7nk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31133'
Jul 20 22:04:43.921: INFO: stderr: "I0720 22:04:43.847316    4232 log.go:172] (0xc0006ec9a0) (0xc0006e2000) Create stream\nI0720 22:04:43.847374    4232 log.go:172] (0xc0006ec9a0) (0xc0006e2000) Stream added, broadcasting: 1\nI0720 22:04:43.850971    4232 log.go:172] (0xc0006ec9a0) Reply frame received for 1\nI0720 22:04:43.851024    4232 log.go:172] (0xc0006ec9a0) (0xc0006e2140) Create stream\nI0720 22:04:43.851046    4232 log.go:172] (0xc0006ec9a0) (0xc0006e2140) Stream added, broadcasting: 3\nI0720 22:04:43.852086    4232 log.go:172] (0xc0006ec9a0) Reply frame received for 3\nI0720 22:04:43.852125    4232 log.go:172] (0xc0006ec9a0) (0xc0006e21e0) Create stream\nI0720 22:04:43.852139    4232 log.go:172] (0xc0006ec9a0) (0xc0006e21e0) Stream added, broadcasting: 5\nI0720 22:04:43.853281    4232 log.go:172] (0xc0006ec9a0) Reply frame received for 5\nI0720 22:04:43.914955    4232 log.go:172] (0xc0006ec9a0) Data frame received for 3\nI0720 22:04:43.914999    4232 log.go:172] (0xc0006e2140) (3) Data frame handling\nI0720 22:04:43.915042    4232 log.go:172] (0xc0006ec9a0) Data frame received for 5\nI0720 22:04:43.915059    4232 log.go:172] (0xc0006e21e0) (5) Data frame handling\nI0720 22:04:43.915078    4232 log.go:172] (0xc0006e21e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.6 31133\nConnection to 172.18.0.6 31133 port [tcp/31133] succeeded!\nI0720 22:04:43.915184    4232 log.go:172] (0xc0006ec9a0) Data frame received for 5\nI0720 22:04:43.915207    4232 log.go:172] (0xc0006e21e0) (5) Data frame handling\nI0720 22:04:43.916693    4232 log.go:172] (0xc0006ec9a0) Data frame received for 1\nI0720 22:04:43.916713    4232 log.go:172] (0xc0006e2000) (1) Data frame handling\nI0720 22:04:43.916856    4232 log.go:172] (0xc0006e2000) (1) Data frame sent\nI0720 22:04:43.916879    4232 log.go:172] (0xc0006ec9a0) (0xc0006e2000) Stream removed, broadcasting: 1\nI0720 22:04:43.916901    4232 log.go:172] (0xc0006ec9a0) Go away received\nI0720 22:04:43.917300    4232 log.go:172] (0xc0006ec9a0) (0xc0006e2000) Stream removed, broadcasting: 1\nI0720 22:04:43.917319    4232 log.go:172] (0xc0006ec9a0) (0xc0006e2140) Stream removed, broadcasting: 3\nI0720 22:04:43.917328    4232 log.go:172] (0xc0006ec9a0) (0xc0006e21e0) Stream removed, broadcasting: 5\n"
Jul 20 22:04:43.921: INFO: stdout: ""
Jul 20 22:04:43.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3739 execpodjc7nk -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 31133'
Jul 20 22:04:44.111: INFO: stderr: "I0720 22:04:44.036356    4253 log.go:172] (0xc000a000b0) (0xc000a54460) Create stream\nI0720 22:04:44.036413    4253 log.go:172] (0xc000a000b0) (0xc000a54460) Stream added, broadcasting: 1\nI0720 22:04:44.039434    4253 log.go:172] (0xc000a000b0) Reply frame received for 1\nI0720 22:04:44.039469    4253 log.go:172] (0xc000a000b0) (0xc00041fc20) Create stream\nI0720 22:04:44.039477    4253 log.go:172] (0xc000a000b0) (0xc00041fc20) Stream added, broadcasting: 3\nI0720 22:04:44.040585    4253 log.go:172] (0xc000a000b0) Reply frame received for 3\nI0720 22:04:44.040604    4253 log.go:172] (0xc000a000b0) (0xc000a54500) Create stream\nI0720 22:04:44.040611    4253 log.go:172] (0xc000a000b0) (0xc000a54500) Stream added, broadcasting: 5\nI0720 22:04:44.041703    4253 log.go:172] (0xc000a000b0) Reply frame received for 5\nI0720 22:04:44.104145    4253 log.go:172] (0xc000a000b0) Data frame received for 3\nI0720 22:04:44.104170    4253 log.go:172] (0xc00041fc20) (3) Data frame handling\nI0720 22:04:44.104209    4253 log.go:172] (0xc000a000b0) Data frame received for 5\nI0720 22:04:44.104237    4253 log.go:172] (0xc000a54500) (5) Data frame handling\nI0720 22:04:44.104260    4253 log.go:172] (0xc000a54500) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.10 31133\nConnection to 172.18.0.10 31133 port [tcp/31133] succeeded!\nI0720 22:04:44.105068    4253 log.go:172] (0xc000a000b0) Data frame received for 5\nI0720 22:04:44.105093    4253 log.go:172] (0xc000a54500) (5) Data frame handling\nI0720 22:04:44.106600    4253 log.go:172] (0xc000a000b0) Data frame received for 1\nI0720 22:04:44.106631    4253 log.go:172] (0xc000a54460) (1) Data frame handling\nI0720 22:04:44.106660    4253 log.go:172] (0xc000a54460) (1) Data frame sent\nI0720 22:04:44.106685    4253 log.go:172] (0xc000a000b0) (0xc000a54460) Stream removed, broadcasting: 1\nI0720 22:04:44.106703    4253 log.go:172] (0xc000a000b0) Go away received\nI0720 22:04:44.106944    4253 log.go:172] (0xc000a000b0) (0xc000a54460) Stream removed, broadcasting: 1\nI0720 22:04:44.106959    4253 log.go:172] (0xc000a000b0) (0xc00041fc20) Stream removed, broadcasting: 3\nI0720 22:04:44.106965    4253 log.go:172] (0xc000a000b0) (0xc000a54500) Stream removed, broadcasting: 5\n"
Jul 20 22:04:44.111: INFO: stdout: ""
Jul 20 22:04:44.111: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:04:44.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3739" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:14.967 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":269,"skipped":4450,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:04:44.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 22:04:44.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 20 22:04:46.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1584 create -f -'
Jul 20 22:04:50.236: INFO: stderr: ""
Jul 20 22:04:50.236: INFO: stdout: "e2e-test-crd-publish-openapi-1551-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jul 20 22:04:50.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1584 delete e2e-test-crd-publish-openapi-1551-crds test-cr'
Jul 20 22:04:50.687: INFO: stderr: ""
Jul 20 22:04:50.687: INFO: stdout: "e2e-test-crd-publish-openapi-1551-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jul 20 22:04:50.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1584 apply -f -'
Jul 20 22:04:51.063: INFO: stderr: ""
Jul 20 22:04:51.063: INFO: stdout: "e2e-test-crd-publish-openapi-1551-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jul 20 22:04:51.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1584 delete e2e-test-crd-publish-openapi-1551-crds test-cr'
Jul 20 22:04:51.184: INFO: stderr: ""
Jul 20 22:04:51.184: INFO: stdout: "e2e-test-crd-publish-openapi-1551-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul 20 22:04:51.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1551-crds'
Jul 20 22:04:51.409: INFO: stderr: ""
Jul 20 22:04:51.409: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1551-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:04:54.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1584" for this suite.

• [SLOW TEST:10.102 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":270,"skipped":4467,"failed":0}
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:04:54.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
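(Each request below goes through the API server's node proxy subresource; the equivalent ad-hoc call, using this cluster's node name, would be:)

kubectl get --raw "/api/v1/nodes/jerma-worker/proxy/logs/"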
Jul 20 22:04:54.389: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
alternatives.log
containers/
(the same two-entry directory listing is returned for each of the 20 proxy requests; the remaining identical responses are omitted)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Destroying namespace "proxy-..." for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":271,"skipped":4468,"failed":0}
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-2b9318bc-e0a3-45fc-a229-54d2271758fc
STEP: Creating secret with name s-test-opt-upd-c2b0ca4a-4de4-4d83-8ba3-9474d0b4356c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2b9318bc-e0a3-45fc-a229-54d2271758fc
STEP: Updating secret s-test-opt-upd-c2b0ca4a-4de4-4d83-8ba3-9474d0b4356c
STEP: Creating secret with name s-test-opt-create-360fc70a-ee49-4c25-8307-3d5dd11cfba2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:06:23.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2904" for this suite.

• [SLOW TEST:88.603 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4468,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:06:23.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-r5md
STEP: Creating a pod to test atomic-volume-subpath
Jul 20 22:06:23.155: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-r5md" in namespace "subpath-7535" to be "success or failure"
Jul 20 22:06:23.177: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Pending", Reason="", readiness=false. Elapsed: 21.418993ms
Jul 20 22:06:25.199: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043356927s
Jul 20 22:06:27.203: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 4.047408261s
Jul 20 22:06:29.207: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 6.051876261s
Jul 20 22:06:31.212: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 8.056890614s
Jul 20 22:06:33.217: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 10.061103483s
Jul 20 22:06:35.220: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 12.064844715s
Jul 20 22:06:37.225: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 14.069083503s
Jul 20 22:06:39.228: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 16.07284154s
Jul 20 22:06:41.235: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 18.079036771s
Jul 20 22:06:43.239: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 20.083205617s
Jul 20 22:06:45.243: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 22.087540009s
Jul 20 22:06:47.247: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Running", Reason="", readiness=true. Elapsed: 24.091668597s
Jul 20 22:06:49.251: INFO: Pod "pod-subpath-test-downwardapi-r5md": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.095150587s
STEP: Saw pod success
Jul 20 22:06:49.251: INFO: Pod "pod-subpath-test-downwardapi-r5md" satisfied condition "success or failure"
Jul 20 22:06:49.253: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-r5md container test-container-subpath-downwardapi-r5md: 
STEP: delete the pod
Jul 20 22:06:49.290: INFO: Waiting for pod pod-subpath-test-downwardapi-r5md to disappear
Jul 20 22:06:49.295: INFO: Pod pod-subpath-test-downwardapi-r5md no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-r5md
Jul 20 22:06:49.295: INFO: Deleting pod "pod-subpath-test-downwardapi-r5md" in namespace "subpath-7535"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:06:49.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7535" for this suite.

• [SLOW TEST:26.246 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":273,"skipped":4485,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:06:49.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 20 22:06:49.393: INFO: Waiting up to 5m0s for pod "pod-1bbe45f3-d8ac-411f-8c27-e90d98ae44dd" in namespace "emptydir-111" to be "success or failure"
Jul 20 22:06:49.397: INFO: Pod "pod-1bbe45f3-d8ac-411f-8c27-e90d98ae44dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404631ms
Jul 20 22:06:51.401: INFO: Pod "pod-1bbe45f3-d8ac-411f-8c27-e90d98ae44dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008195539s
Jul 20 22:06:53.405: INFO: Pod "pod-1bbe45f3-d8ac-411f-8c27-e90d98ae44dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011999448s
STEP: Saw pod success
Jul 20 22:06:53.405: INFO: Pod "pod-1bbe45f3-d8ac-411f-8c27-e90d98ae44dd" satisfied condition "success or failure"
Jul 20 22:06:53.408: INFO: Trying to get logs from node jerma-worker2 pod pod-1bbe45f3-d8ac-411f-8c27-e90d98ae44dd container test-container: 
STEP: delete the pod
Jul 20 22:06:53.425: INFO: Waiting for pod pod-1bbe45f3-d8ac-411f-8c27-e90d98ae44dd to disappear
Jul 20 22:06:53.447: INFO: Pod pod-1bbe45f3-d8ac-411f-8c27-e90d98ae44dd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:06:53.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-111" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4497,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:06:53.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 22:06:53.639: INFO: Creating deployment "webserver-deployment"
Jul 20 22:06:53.658: INFO: Waiting for observed generation 1
Jul 20 22:06:55.667: INFO: Waiting for all required pods to come up
Jul 20 22:06:55.671: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul 20 22:07:05.679: INFO: Waiting for deployment "webserver-deployment" to complete
Jul 20 22:07:05.685: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jul 20 22:07:05.690: INFO: Updating deployment webserver-deployment
Jul 20 22:07:05.690: INFO: Waiting for observed generation 2
Jul 20 22:07:07.804: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul 20 22:07:07.807: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul 20 22:07:07.809: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul 20 22:07:07.815: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul 20 22:07:07.815: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul 20 22:07:07.817: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul 20 22:07:07.821: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jul 20 22:07:07.821: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jul 20 22:07:07.825: INFO: Updating deployment webserver-deployment
Jul 20 22:07:07.825: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jul 20 22:07:08.104: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul 20 22:07:08.526: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
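(The final counts follow from proportional scaling: with maxSurge=3 the deployment may run 30+3=33 pods during the rollout, so going from 13 total replicas (8 old + 5 new) to 33 adds 20, split roughly in the 8:5 ratio of the two ReplicaSets, i.e. 12 more for the old one and 8 more for the new one, which gives exactly the 20 and 13 verified above.)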
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Jul 20 22:07:09.052: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-2934 /apis/apps/v1/namespaces/deployment-2934/deployments/webserver-deployment 7590033e-271e-4db9-b362-6079fd87f430 2886024 3 2020-07-20 22:06:53 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005425978  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-07-20 22:07:06 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-20 22:07:08 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jul 20 22:07:09.206: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-2934 /apis/apps/v1/namespaces/deployment-2934/replicasets/webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 2886071 3 2020-07-20 22:07:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 7590033e-271e-4db9-b362-6079fd87f430 0xc002f1dee7 0xc002f1dee8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f1df58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 20 22:07:09.206: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jul 20 22:07:09.206: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-2934 /apis/apps/v1/namespaces/deployment-2934/replicasets/webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 2886063 3 2020-07-20 22:06:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 7590033e-271e-4db9-b362-6079fd87f430 0xc002f1de27 0xc002f1de28}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f1de88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jul 20 22:07:09.248: INFO: Pod "webserver-deployment-595b5b9587-2kbcf" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2kbcf webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-2kbcf 945272b9-3e8c-4550-bb92-99fb3a632911 2886068 0 2020-07-20 22:07:07 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be4830 0xc002be4831}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-20 22:07:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.248: INFO: Pod "webserver-deployment-595b5b9587-2m2h4" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2m2h4 webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-2m2h4 442bf41f-18dc-4449-b373-3b4db0405788 2886049 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be4987 0xc002be4988}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.249: INFO: Pod "webserver-deployment-595b5b9587-4mzbx" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4mzbx webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-4mzbx 00a612f6-149c-41db-ae0f-73f77d9e4f0e 2885892 0 2020-07-20 22:06:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be4aa7 0xc002be4aa8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.213,StartTime:2020-07-20 22:06:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 22:06:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0c666c631a328efcb993204427be407f133fb63f96f0cfee921e918809583f49,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.213,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.249: INFO: Pod "webserver-deployment-595b5b9587-4zddm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4zddm webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-4zddm be0156e1-c186-4308-9edb-f19062fe568a 2886060 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be4c27 0xc002be4c28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.249: INFO: Pod "webserver-deployment-595b5b9587-5rrbx" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5rrbx webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-5rrbx 210179c9-98a2-403c-a30c-3f982e206f0d 2886048 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be4d47 0xc002be4d48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.249: INFO: Pod "webserver-deployment-595b5b9587-67zzs" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-67zzs webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-67zzs 039e3114-b576-4878-a4a8-e359ddbaee9d 2885907 0 2020-07-20 22:06:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be4e67 0xc002be4e68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.172,StartTime:2020-07-20 22:06:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 22:07:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fc79587f3454c7ab0e1adae9a9bb37aec98bcae46d4e23ca7c2f8ae05e278a77,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.172,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.249: INFO: Pod "webserver-deployment-595b5b9587-8zf7q" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8zf7q webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-8zf7q 8046cd8e-b2ee-45a8-a2f0-f29bcd611992 2885932 0 2020-07-20 22:06:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be4fe7 0xc002be4fe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.174,StartTime:2020-07-20 22:06:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 22:07:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ec389392613f37a446ccacb17c368c5035a04d3172b125a0fcaad38656152bb5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
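The "is available" / "is not available" verdicts on these dump lines track the pod's Ready condition, visible in the Status block above (Type:Ready,Status:True). A minimal Go sketch of that check, using the standard k8s.io/api types — a hypothetical helper for illustration, not the e2e framework's own code, and ignoring minReadySeconds:

package podcheck

import (
	v1 "k8s.io/api/core/v1"
)

// isPodAvailable reports whether the pod's Ready condition is True,
// mirroring the available/not-available labels in the log lines above.
// Sketch only: the real availability rule also honors minReadySeconds.
func isPodAvailable(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}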
Jul 20 22:07:09.250: INFO: Pod "webserver-deployment-595b5b9587-fpcb8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fpcb8 webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-fpcb8 96791edf-327e-442a-88b4-5969c86694f7 2886061 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5167 0xc002be5168}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.250: INFO: Pod "webserver-deployment-595b5b9587-lg4zd" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lg4zd webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-lg4zd 95a5eae7-aa34-44a5-b578-fb259c5813d2 2885924 0 2020-07-20 22:06:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5287 0xc002be5288}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.215,StartTime:2020-07-20 22:06:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 22:07:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b52ef726459515f883fd68309fc1df5f1d664e7bf9d467d9c2b613baac8abd89,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.250: INFO: Pod "webserver-deployment-595b5b9587-qp6hg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qp6hg webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-qp6hg e254b672-d137-421a-b37e-0f11cc6aa682 2886057 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5407 0xc002be5408}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.250: INFO: Pod "webserver-deployment-595b5b9587-rhlmr" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rhlmr webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-rhlmr 3b98b3c6-f233-4c4d-86a8-6846b6843a77 2885926 0 2020-07-20 22:06:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5527 0xc002be5528}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.173,StartTime:2020-07-20 22:06:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 22:07:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://82d7a9ea4fa43a9f337abe5216853b5f1bf9fd675e589c1a29d17e50d3bb9533,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.173,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.250: INFO: Pod "webserver-deployment-595b5b9587-rlg7j" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rlg7j webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-rlg7j 6569a22c-f0a2-412e-a3ca-f1d27d1e4486 2886080 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be56a7 0xc002be56a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-20 22:07:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.251: INFO: Pod "webserver-deployment-595b5b9587-rm8rq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rm8rq webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-rm8rq f2ac40b0-7c17-4d74-8c69-b4b76e2ba505 2886035 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5807 0xc002be5808}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.251: INFO: Pod "webserver-deployment-595b5b9587-s4dk9" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-s4dk9 webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-s4dk9 33191975-c742-4b00-95b3-5f75a3b2a7f6 2885908 0 2020-07-20 22:06:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5927 0xc002be5928}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.214,StartTime:2020-07-20 22:06:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 22:07:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b00649b44540335f3c32c58e6df1624d59572942b8091275e752a772b6f6e7a1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.214,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.251: INFO: Pod "webserver-deployment-595b5b9587-vbl2d" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vbl2d webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-vbl2d cf257a7e-1a9a-4789-a1af-c70d33268d3b 2886050 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5aa7 0xc002be5aa8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.251: INFO: Pod "webserver-deployment-595b5b9587-w2kq6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w2kq6 webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-w2kq6 cac1c4c2-8d1e-4e40-9319-6bc676243647 2886055 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5bd7 0xc002be5bd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.251: INFO: Pod "webserver-deployment-595b5b9587-wblvw" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wblvw webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-wblvw 7a5bd750-29e3-4673-a576-372d99e666fd 2886062 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5cf7 0xc002be5cf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.252: INFO: Pod "webserver-deployment-595b5b9587-wp8pp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wp8pp webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-wp8pp fd087c66-0b92-4c86-aa2c-ffea7f66f544 2886030 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5e17 0xc002be5e18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.252: INFO: Pod "webserver-deployment-595b5b9587-xzclh" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xzclh webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-xzclh c3195c90-c191-4086-9b51-b556278f9636 2885941 0 2020-07-20 22:06:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc002be5f37 0xc002be5f38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.216,StartTime:2020-07-20 22:06:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 22:07:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f7f4b7376e921b317c151753debc90663e835ba48941fd89c728b632acff2c25,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.216,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.252: INFO: Pod "webserver-deployment-595b5b9587-zr77f" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zr77f webserver-deployment-595b5b9587- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-595b5b9587-zr77f 12ac6d86-e971-49f7-98a5-f13e70155efa 2885871 0 2020-07-20 22:06:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 749f3b9a-3a26-4529-a3ff-8308c114ffec 0xc00209c477 0xc00209c478}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:06:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.2.212,StartTime:2020-07-20 22:06:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-20 22:06:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0a105c9c2d70d36d8a96a4499c5cc8598543b51a2463e16cfa7c5246e0872368,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.212,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
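The dumps that follow switch from the old ReplicaSet (pod-template-hash 595b5b9587, image httpd:2.4.38-alpine) to the new one (c7997dcc8), whose template uses the deliberately unresolvable image webserver:404, so its pods never leave Pending. The pod-template-hash label in each ObjectMeta is what ties a pod to its ReplicaSet; here is a short client-go sketch that groups this namespace's pods by that label (kubeconfig path, namespace deployment-2934, and the name=httpd selector are taken from this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Out-of-cluster config, same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List the deployment's pods and count them per ReplicaSet hash.
	pods, err := clientset.CoreV1().Pods("deployment-2934").List(
		context.TODO(), metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	byHash := map[string]int{}
	for _, p := range pods.Items {
		byHash[p.Labels["pod-template-hash"]]++
	}
	for hash, n := range byHash {
		fmt.Printf("pod-template-hash=%s: %d pods\n", hash, n)
	}
}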
Jul 20 22:07:09.252: INFO: Pod "webserver-deployment-c7997dcc8-5dnw7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5dnw7 webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-5dnw7 0c58f058-41e7-47af-9f5d-f1ec31f648d4 2886005 0 2020-07-20 22:07:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209c8a7 0xc00209c8a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-07-20 22:07:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.253: INFO: Pod "webserver-deployment-c7997dcc8-b9bsn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b9bsn webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-b9bsn f4e60a8d-db28-43b9-a575-c2ee20451802 2886058 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209ca87 0xc00209ca88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.253: INFO: Pod "webserver-deployment-c7997dcc8-bcw65" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bcw65 webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-bcw65 a5df395b-8c5e-403c-aaf9-4aa414e87c22 2886073 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209cc97 0xc00209cc98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-07-20 22:07:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.253: INFO: Pod "webserver-deployment-c7997dcc8-btc95" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-btc95 webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-btc95 e1a06660-4ad5-4a3d-b094-843fe7349c79 2885974 0 2020-07-20 22:07:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209ce27 0xc00209ce28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-07-20 22:07:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.253: INFO: Pod "webserver-deployment-c7997dcc8-f94kt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-f94kt webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-f94kt 093e5f90-cff0-4e69-9750-2eb2bd47d360 2886054 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209cfa7 0xc00209cfa8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.254: INFO: Pod "webserver-deployment-c7997dcc8-ghnzn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ghnzn webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-ghnzn ed317ac8-7116-48d6-95db-29a2c82a89f9 2886053 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209d187 0xc00209d188}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.254: INFO: Pod "webserver-deployment-c7997dcc8-j7rqw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j7rqw webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-j7rqw 662996e0-d35f-4ebf-bda7-f8573099b8c1 2886056 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209d357 0xc00209d358}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.254: INFO: Pod "webserver-deployment-c7997dcc8-j9x8g" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j9x8g webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-j9x8g c166830b-0068-483e-8269-f54585dd417d 2886067 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209d537 0xc00209d538}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.254: INFO: Pod "webserver-deployment-c7997dcc8-mv6kw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mv6kw webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-mv6kw 8a793855-63d5-4b5c-8793-edfb87eec7dc 2886046 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209d7d7 0xc00209d7d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.254: INFO: Pod "webserver-deployment-c7997dcc8-rsc98" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rsc98 webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-rsc98 97597387-6f52-4bea-8e18-caf59c5e0736 2885983 0 2020-07-20 22:07:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209d987 0xc00209d988}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-20 22:07:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.255: INFO: Pod "webserver-deployment-c7997dcc8-v9v42" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v9v42 webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-v9v42 edef129f-987d-4926-b8ed-5fa835ea0ea3 2885972 0 2020-07-20 22:07:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209dc27 0xc00209dc28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-20 22:07:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.255: INFO: Pod "webserver-deployment-c7997dcc8-wg4nq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wg4nq webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-wg4nq 721c698b-2e6b-45b8-afd6-d9b172aab984 2886051 0 2020-07-20 22:07:08 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc00209de67 0xc00209de68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 20 22:07:09.255: INFO: Pod "webserver-deployment-c7997dcc8-xrh45" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xrh45 webserver-deployment-c7997dcc8- deployment-2934 /api/v1/namespaces/deployment-2934/pods/webserver-deployment-c7997dcc8-xrh45 da70ecbd-8ffd-4d66-961f-80ce049b9378 2886008 0 2020-07-20 22:07:06 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 db334128-d85c-4186-b07f-43ddfc1e6cfd 0xc0028b2027 0xc0028b2028}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2sn6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2sn6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2sn6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-20 22:07:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-07-20 22:07:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
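Every pod dump above tells the same story: Phase=Pending, with the httpd container either Waiting in ContainerCreating or reporting no container status at all, because the replacement ReplicaSet (c7997dcc8) rolls out the intentionally unpullable image webserver:404. A quicker way to inspect the same state by hand, assuming the name=httpd label the test's deployment uses, is:

  kubectl -n deployment-2934 get pods -l name=httpd -o wide    # which replicas are unavailable, and on which node
  kubectl -n deployment-2934 get rs -l name=httpd              # replica counts of the old vs. new ReplicaSet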
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:07:09.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2934" for this suite.

• [SLOW TEST:16.529 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":275,"skipped":4524,"failed":0}
SSSSSSS
------------------------------
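The proportional-scaling test above scales the webserver deployment while the rollout to the unpullable webserver:404 image is still in flight, then checks that the deployment controller splits the added replicas across the old and new ReplicaSets in proportion to their current sizes. A minimal sketch of reproducing that behaviour by hand (illustrative names; kubectl create deployment labels the pods app=webserver):

  kubectl create deployment webserver --image=httpd:2.4
  kubectl scale deployment/webserver --replicas=10
  kubectl set image deployment/webserver httpd=webserver:404   # unpullable tag stalls the rollout
  kubectl scale deployment/webserver --replicas=30             # scale while two ReplicaSets coexist
  kubectl get rs -l app=webserver                              # extra replicas split proportionally between old and new
------------------------------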
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:07:09.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:07:27.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2708" for this suite.

• [SLOW TEST:17.138 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4531,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
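The image-defaults test above creates a pod whose command and args are both left empty, so the kubelet runs the image's own ENTRYPOINT and CMD unchanged. A hand-run equivalent (a sketch; the pod name and image are illustrative):

  kubectl run image-defaults --image=httpd:2.4 --restart=Never
  kubectl get pod image-defaults -o jsonpath='{.spec.containers[0].command}'   # empty output: nothing overrides the image defaults
------------------------------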
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:07:27.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 20 22:07:37.821: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 22:07:37.845: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 22:07:39.845: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 22:07:39.849: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 22:07:41.845: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 22:07:41.850: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 22:07:43.845: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 22:07:43.849: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 22:07:45.845: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 22:07:45.849: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 20 22:07:47.845: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 20 22:07:47.849: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:07:47.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9486" for this suite.

• [SLOW TEST:20.743 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4562,"failed":0}
SSS
------------------------------
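The preStop test above registers an exec hook on a pod, deletes the pod, and polls (the repeated "still exists" lines) until graceful termination finishes, then verifies the hook fired against the helper server created in its BeforeEach. A self-contained sketch of the same mechanism, with an illustrative hook that merely delays shutdown where the real test phones home to its helper pod:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-demo
  spec:
    terminationGracePeriodSeconds: 30
    containers:
    - name: main
      image: busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          exec:
            command: ["sleep", "5"]   # runs to completion before SIGTERM reaches the container
  EOF
  kubectl delete pod prestop-demo
------------------------------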
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Jul 20 22:07:47.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Jul 20 22:07:47.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jul 20 22:07:50.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6849 create -f -'
Jul 20 22:07:55.666: INFO: stderr: ""
Jul 20 22:07:55.666: INFO: stdout: "e2e-test-crd-publish-openapi-982-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul 20 22:07:55.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6849 delete e2e-test-crd-publish-openapi-982-crds test-foo'
Jul 20 22:07:55.804: INFO: stderr: ""
Jul 20 22:07:55.804: INFO: stdout: "e2e-test-crd-publish-openapi-982-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jul 20 22:07:55.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6849 apply -f -'
Jul 20 22:07:56.084: INFO: stderr: ""
Jul 20 22:07:56.084: INFO: stdout: "e2e-test-crd-publish-openapi-982-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul 20 22:07:56.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6849 delete e2e-test-crd-publish-openapi-982-crds test-foo'
Jul 20 22:07:56.238: INFO: stderr: ""
Jul 20 22:07:56.238: INFO: stdout: "e2e-test-crd-publish-openapi-982-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jul 20 22:07:56.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6849 create -f -'
Jul 20 22:07:56.540: INFO: rc: 1
Jul 20 22:07:56.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6849 apply -f -'
Jul 20 22:07:56.846: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jul 20 22:07:56.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6849 create -f -'
Jul 20 22:07:57.098: INFO: rc: 1
Jul 20 22:07:57.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6849 apply -f -'
Jul 20 22:07:57.299: INFO: rc: 1
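Each rc: 1 above records a client-side rejection: kubectl validates the manifest against the published OpenAPI schema before sending anything to the server. A sketch of the two failure modes being exercised (reconstructed fragments, not the suite's exact fixtures):

# Unknown property: rejected when the schema disallows additional fields
spec:
  bars:
  - name: test-bar
    unknownField: oops

# Missing required property: rejected because "name" is required on each bar
spec:
  bars:
  - age: "10"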
STEP: kubectl explain works to explain CR properties
Jul 20 22:07:57.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-982-crds'
Jul 20 22:07:57.565: INFO: stderr: ""
Jul 20 22:07:57.565: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-982-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jul 20 22:07:57.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-982-crds.metadata'
Jul 20 22:07:57.893: INFO: stderr: ""
Jul 20 22:07:57.893: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-982-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jul 20 22:07:57.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-982-crds.spec'
Jul 20 22:07:58.583: INFO: stderr: ""
Jul 20 22:07:58.583: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-982-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jul 20 22:07:58.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-982-crds.spec.bars'
Jul 20 22:07:59.009: INFO: stderr: ""
Jul 20 22:07:59.009: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-982-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
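The explain output above is rendered from the CRD's published structural schema. Inferring only from that output, the validation schema behind it would look roughly like this openAPIV3Schema sketch (a reconstruction, not the test's actual fixture):

openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      description: Specification of Foo
      properties:
        bars:
          type: array
          description: List of Bars and their specs.
          items:
            type: object
            required: ["name"]
            properties:
              name:
                type: string
                description: Name of Bar.
              age:
                type: string
                description: Age of Bar.
              bazs:
                type: array
                description: List of Bazs.
                items:
                  type: string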
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jul 20 22:07:59.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-982-crds.spec.bars2'
Jul 20 22:07:59.293: INFO: rc: 1
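kubectl explain resolves each dotted path segment against that published schema, so existing fields can be drilled into down to the leaves, while a segment that does not exist (spec.bars2 above) makes the command exit non-zero. For instance, this path would still resolve:

kubectl explain e2e-test-crd-publish-openapi-982-crds.spec.bars.name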
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Jul 20 22:08:02.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6849" for this suite.

• [SLOW TEST:14.309 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":278,"skipped":4565,"failed":0}
Jul 20 22:08:02.173: INFO: Running AfterSuite actions on all nodes
Jul 20 22:08:02.173: INFO: Running AfterSuite actions on node 1
Jul 20 22:08:02.173: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4565,"failed":0}

Ran 278 of 4843 Specs in 4672.695 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4565 Skipped
PASS