I0210 23:39:07.180969 9 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0210 23:39:07.181808 9 e2e.go:109] Starting e2e run "926c12fe-a8e8-47b4-bf1c-6765a596be64" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581377945 - Will randomize all specs
Will run 280 of 4845 specs

Feb 10 23:39:07.254: INFO: >>> kubeConfig: /root/.kube/config
Feb 10 23:39:07.258: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 10 23:39:07.279: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 10 23:39:07.313: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 10 23:39:07.313: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 10 23:39:07.313: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 10 23:39:07.322: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 10 23:39:07.322: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 10 23:39:07.322: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Feb 10 23:39:07.324: INFO: kube-apiserver version: v1.17.0
Feb 10 23:39:07.324: INFO: >>> kubeConfig: /root/.kube/config
Feb 10 23:39:07.328: INFO: Cluster IP family: ipv4
S
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:39:07.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
Feb 10 23:39:07.426: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 10 23:39:07.428: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:39:20.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4846" for this suite.
• [SLOW TEST:13.080 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":1,"skipped":1,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:39:20.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-7463
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 10 23:39:20.528: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 10 23:39:20.702: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 10 23:39:22.706: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 10 23:39:24.712: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 10 23:39:26.881: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 10 23:39:28.807: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 10 23:39:30.710: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 10 23:39:32.710: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 10 23:39:34.709: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 10 23:39:36.709: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 10 23:39:38.711: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 10 23:39:40.713: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 10 23:39:42.707: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 10 23:39:44.787: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 10 23:39:44.795: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 10 23:39:56.898: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7463 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 23:39:56.898: INFO: >>> kubeConfig: /root/.kube/config
I0210 23:39:56.947999 9 log.go:172] (0xc002a6d130) (0xc0018465a0) Create stream
I0210 23:39:56.948161 9 log.go:172] (0xc002a6d130) (0xc0018465a0) Stream added, broadcasting: 1
I0210 23:39:56.953298 9 log.go:172] (0xc002a6d130) Reply frame received for 1
I0210 23:39:56.953339 9 log.go:172] (0xc002a6d130) (0xc0027e6b40) Create stream
I0210 23:39:56.953352 9 log.go:172] (0xc002a6d130) (0xc0027e6b40) Stream added, broadcasting: 3
I0210 23:39:56.956662 9 log.go:172] (0xc002a6d130) Reply frame received for 3
I0210 23:39:56.956857 9 log.go:172] (0xc002a6d130) (0xc000f74780) Create stream
I0210 23:39:56.956875 9 log.go:172] (0xc002a6d130) (0xc000f74780) Stream added, broadcasting: 5
I0210 23:39:56.959875 9 log.go:172] (0xc002a6d130) Reply frame received for 5
I0210 23:39:58.055155 9 log.go:172] (0xc002a6d130) Data frame received for 3
I0210 23:39:58.055315 9 log.go:172] (0xc0027e6b40) (3) Data frame handling
I0210 23:39:58.055363 9 log.go:172] (0xc0027e6b40) (3) Data frame sent
I0210 23:39:58.152146 9 log.go:172] (0xc002a6d130) Data frame received for 1
I0210 23:39:58.152358 9 log.go:172] (0xc0018465a0) (1) Data frame handling
I0210 23:39:58.152397 9 log.go:172] (0xc0018465a0) (1) Data frame sent
I0210 23:39:58.152794 9 log.go:172] (0xc002a6d130) (0xc0018465a0) Stream removed, broadcasting: 1
I0210 23:39:58.153912 9 log.go:172] (0xc002a6d130) (0xc0027e6b40) Stream removed, broadcasting: 3
I0210 23:39:58.154015 9 log.go:172] (0xc002a6d130) (0xc000f74780) Stream removed, broadcasting: 5
I0210 23:39:58.154064 9 log.go:172] (0xc002a6d130) (0xc0018465a0) Stream removed, broadcasting: 1
I0210 23:39:58.154086 9 log.go:172] (0xc002a6d130) (0xc0027e6b40) Stream removed, broadcasting: 3
I0210 23:39:58.154094 9 log.go:172] (0xc002a6d130) (0xc000f74780) Stream removed, broadcasting: 5
Feb 10 23:39:58.154: INFO: Found all expected endpoints: [netserver-0]
I0210 23:39:58.155246 9 log.go:172] (0xc002a6d130) Go away received
Feb 10 23:39:58.163: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7463 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 10 23:39:58.164: INFO: >>> kubeConfig: /root/.kube/config
I0210 23:39:58.211508 9 log.go:172] (0xc002dd8630) (0xc000f75360) Create stream
I0210 23:39:58.211665 9 log.go:172] (0xc002dd8630) (0xc000f75360) Stream added, broadcasting: 1
I0210 23:39:58.229951 9 log.go:172] (0xc002dd8630) Reply frame received for 1
I0210 23:39:58.230093 9 log.go:172] (0xc002dd8630) (0xc0027e6c80) Create stream
I0210 23:39:58.230112 9 log.go:172] (0xc002dd8630) (0xc0027e6c80) Stream added, broadcasting: 3
I0210 23:39:58.232598 9 log.go:172] (0xc002dd8630) Reply frame received for 3
I0210 23:39:58.232654 9 log.go:172] (0xc002dd8630) (0xc000f75c20) Create stream
I0210 23:39:58.232663 9 log.go:172] (0xc002dd8630) (0xc000f75c20) Stream added, broadcasting: 5
I0210 23:39:58.233761 9 log.go:172] (0xc002dd8630) Reply frame received for 5
I0210 23:39:59.319152 9 log.go:172] (0xc002dd8630) Data frame received for 3
I0210 23:39:59.319342 9 log.go:172] (0xc0027e6c80) (3) Data frame handling
I0210 23:39:59.319427 9 log.go:172] (0xc0027e6c80) (3) Data frame sent
I0210 23:39:59.416466 9 log.go:172] (0xc002dd8630) Data frame received for 1
I0210 23:39:59.416970 9 log.go:172] (0xc002dd8630) (0xc000f75c20) Stream removed, broadcasting: 5
I0210 23:39:59.417029 9 log.go:172] (0xc000f75360) (1) Data frame handling
I0210 23:39:59.417052 9 log.go:172] (0xc000f75360) (1) Data frame sent
I0210 23:39:59.417136 9 log.go:172] (0xc002dd8630) (0xc0027e6c80) Stream removed, broadcasting: 3
I0210 23:39:59.417167 9 log.go:172] (0xc002dd8630) (0xc000f75360) Stream removed, broadcasting: 1
I0210 23:39:59.417191 9 log.go:172] (0xc002dd8630) Go away received
I0210 23:39:59.417610 9 log.go:172] (0xc002dd8630) (0xc000f75360) Stream removed, broadcasting: 1
I0210 23:39:59.417647 9 log.go:172] (0xc002dd8630) (0xc0027e6c80) Stream removed, broadcasting: 3
I0210 23:39:59.417690 9 log.go:172] (0xc002dd8630) (0xc000f75c20) Stream removed, broadcasting: 5
Feb 10 23:39:59.417: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:39:59.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7463" for this suite.
• [SLOW TEST:39.028 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":2,"skipped":9,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:39:59.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 10 23:40:00.590: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 10 23:40:02.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 23:40:04.701: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 23:40:06.668: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 23:40:11.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 23:40:13.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 23:40:14.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 23:40:16.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716974800, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 10 23:40:19.695: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:40:20.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6965" for this suite.
STEP: Destroying namespace "webhook-6965-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:21.018 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":3,"skipped":13,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:40:20.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-807d2cf0-d021-4b13-b727-fa881cc8cda9
STEP: Creating configMap with name cm-test-opt-upd-e48082e7-fa3c-4e1d-8474-459d5a73099d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-807d2cf0-d021-4b13-b727-fa881cc8cda9
STEP: Updating configmap cm-test-opt-upd-e48082e7-fa3c-4e1d-8474-459d5a73099d
STEP: Creating configMap with name cm-test-opt-create-fbea5e72-feea-4017-9671-28c12530a685
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:40:36.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2421" for this suite.
• [SLOW TEST:16.417 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":4,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:40:36.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1504
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating statefulset ss in namespace statefulset-1504
Feb 10 23:40:36.997: INFO: Found 0 stateful pods, waiting for 1
Feb 10 23:40:47.527: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 10 23:40:57.005: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 10 23:40:57.041: INFO: Deleting all statefulset in ns statefulset-1504
Feb 10 23:40:57.064: INFO: Scaling statefulset ss to 0
Feb 10 23:41:17.129: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 23:41:17.135: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:41:17.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1504" for this suite.
• [SLOW TEST:40.310 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":5,"skipped":44,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:41:17.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 10 23:41:17.436: INFO: Waiting up to 5m0s for pod "downwardapi-volume-786b4a61-e0ba-4d7a-9e52-942fb318420b" in namespace "downward-api-8678" to be "success or failure"
Feb 10 23:41:17.459: INFO: Pod "downwardapi-volume-786b4a61-e0ba-4d7a-9e52-942fb318420b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.9438ms
Feb 10 23:41:19.467: INFO: Pod "downwardapi-volume-786b4a61-e0ba-4d7a-9e52-942fb318420b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030624169s
Feb 10 23:41:21.554: INFO: Pod "downwardapi-volume-786b4a61-e0ba-4d7a-9e52-942fb318420b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118222053s
Feb 10 23:41:23.563: INFO: Pod "downwardapi-volume-786b4a61-e0ba-4d7a-9e52-942fb318420b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127269313s
Feb 10 23:41:25.573: INFO: Pod "downwardapi-volume-786b4a61-e0ba-4d7a-9e52-942fb318420b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.136532157s
STEP: Saw pod success
Feb 10 23:41:25.573: INFO: Pod "downwardapi-volume-786b4a61-e0ba-4d7a-9e52-942fb318420b" satisfied condition "success or failure"
Feb 10 23:41:25.580: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-786b4a61-e0ba-4d7a-9e52-942fb318420b container client-container:
STEP: delete the pod
Feb 10 23:41:25.687: INFO: Waiting for pod downwardapi-volume-786b4a61-e0ba-4d7a-9e52-942fb318420b to disappear
Feb 10 23:41:25.703: INFO: Pod downwardapi-volume-786b4a61-e0ba-4d7a-9e52-942fb318420b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:41:25.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8678" for this suite.
• [SLOW TEST:8.535 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":6,"skipped":61,"failed":0}
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:41:25.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-359e01de-e36a-4a89-afe2-4a6515000924
STEP: Creating a pod to test consume configMaps
Feb 10 23:41:25.919: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-808935f5-7b23-4448-95ad-c34857c17b35" in namespace "projected-9166" to be "success or failure"
Feb 10 23:41:25.953: INFO: Pod "pod-projected-configmaps-808935f5-7b23-4448-95ad-c34857c17b35": Phase="Pending", Reason="", readiness=false. Elapsed: 34.079418ms
Feb 10 23:41:27.962: INFO: Pod "pod-projected-configmaps-808935f5-7b23-4448-95ad-c34857c17b35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042582947s
Feb 10 23:41:29.993: INFO: Pod "pod-projected-configmaps-808935f5-7b23-4448-95ad-c34857c17b35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073584841s
Feb 10 23:41:31.999: INFO: Pod "pod-projected-configmaps-808935f5-7b23-4448-95ad-c34857c17b35": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079872264s
Feb 10 23:41:34.010: INFO: Pod "pod-projected-configmaps-808935f5-7b23-4448-95ad-c34857c17b35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091015955s
STEP: Saw pod success
Feb 10 23:41:34.011: INFO: Pod "pod-projected-configmaps-808935f5-7b23-4448-95ad-c34857c17b35" satisfied condition "success or failure"
Feb 10 23:41:34.016: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-808935f5-7b23-4448-95ad-c34857c17b35 container projected-configmap-volume-test:
STEP: delete the pod
Feb 10 23:41:34.077: INFO: Waiting for pod pod-projected-configmaps-808935f5-7b23-4448-95ad-c34857c17b35 to disappear
Feb 10 23:41:34.130: INFO: Pod pod-projected-configmaps-808935f5-7b23-4448-95ad-c34857c17b35 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:41:34.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9166" for this suite.
• [SLOW TEST:8.421 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":7,"skipped":61,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:41:34.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 10 23:41:34.363: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5572 /api/v1/namespaces/watch-5572/configmaps/e2e-watch-test-watch-closed 8f17e321-93b2-489f-a77e-48326601b181 7629076 0 2020-02-10 23:41:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 10 23:41:34.364: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5572 /api/v1/namespaces/watch-5572/configmaps/e2e-watch-test-watch-closed 8f17e321-93b2-489f-a77e-48326601b181 7629077 0 2020-02-10 23:41:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 10 23:41:34.428: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5572 /api/v1/namespaces/watch-5572/configmaps/e2e-watch-test-watch-closed 8f17e321-93b2-489f-a77e-48326601b181 7629078 0 2020-02-10 23:41:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 10 23:41:34.428: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5572 /api/v1/namespaces/watch-5572/configmaps/e2e-watch-test-watch-closed 8f17e321-93b2-489f-a77e-48326601b181 7629079 0 2020-02-10 23:41:34 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:41:34.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5572" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":8,"skipped":65,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:41:34.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test hostPath mode
Feb 10 23:41:34.646: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6695" to be "success or failure"
Feb 10 23:41:34.689: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 42.744445ms
Feb 10 23:41:36.697: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051093821s
Feb 10 23:41:38.708: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061634801s
Feb 10 23:41:40.715: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068269118s
Feb 10 23:41:42.722: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075431195s
Feb 10 23:41:44.729: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082139237s
STEP: Saw pod success
Feb 10 23:41:44.729: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 10 23:41:44.732: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1:
STEP: delete the pod
Feb 10 23:41:44.830: INFO: Waiting for pod pod-host-path-test to disappear
Feb 10 23:41:44.839: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:41:44.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6695" for this suite.
• [SLOW TEST:10.414 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":9,"skipped":88,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:41:44.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-724
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Feb 10 23:41:45.007: INFO: Found 0 stateful pods, waiting for 3
Feb 10 23:41:55.634: INFO: Found 2 stateful pods, waiting for 3
Feb 10 23:42:05.016: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 23:42:05.017: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 23:42:05.017: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 10 23:42:15.017: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 23:42:15.018: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 23:42:15.018: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 10 23:42:15.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-724 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 10 23:42:17.253: INFO: stderr: "I0210 23:42:17.045570 34 log.go:172] (0xc000840c60) (0xc000665f40) Create stream\nI0210 23:42:17.045845 34 log.go:172] (0xc000840c60) (0xc000665f40) Stream added, broadcasting: 1\nI0210 23:42:17.059232 34 log.go:172] (0xc000840c60) Reply frame received for 1\nI0210 23:42:17.059303 34 log.go:172] (0xc000840c60) (0xc000610780) Create stream\nI0210 23:42:17.059319 34 log.go:172] (0xc000840c60) (0xc000610780) Stream added, broadcasting: 3\nI0210 23:42:17.060892 34 log.go:172] (0xc000840c60) Reply frame received for 3\nI0210 23:42:17.060924 34 log.go:172] (0xc000840c60) (0xc0004a5400) Create stream\nI0210 23:42:17.060933 34 log.go:172] (0xc000840c60) (0xc0004a5400) Stream added, broadcasting: 5\nI0210 23:42:17.062954 34 log.go:172] (0xc000840c60) Reply frame received for 5\nI0210 23:42:17.136423 34 log.go:172] (0xc000840c60) Data frame received for 5\nI0210 23:42:17.136467 34 log.go:172] (0xc0004a5400) (5) Data frame handling\nI0210 23:42:17.136499 34 log.go:172] (0xc0004a5400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0210 23:42:17.162400 34 log.go:172] (0xc000840c60) Data frame received for 3\nI0210 23:42:17.162426 34 log.go:172] (0xc000610780) (3) Data frame handling\nI0210 23:42:17.162462 34 log.go:172] (0xc000610780) (3) Data frame sent\nI0210 23:42:17.242870 34 log.go:172] (0xc000840c60) (0xc000610780) Stream removed, broadcasting: 3\nI0210 23:42:17.243125 34 log.go:172] (0xc000840c60) Data frame received for 1\nI0210 23:42:17.243206 34 log.go:172] (0xc000840c60) (0xc0004a5400) Stream removed, broadcasting: 5\nI0210 23:42:17.243267 34 log.go:172] (0xc000665f40) (1) Data frame handling\nI0210 23:42:17.243288 34 log.go:172] (0xc000665f40) (1) Data frame sent\nI0210 23:42:17.243303 34 log.go:172] (0xc000840c60) (0xc000665f40) Stream removed, broadcasting: 1\nI0210 23:42:17.243331 34 log.go:172] (0xc000840c60) Go away received\nI0210 23:42:17.244141 34 log.go:172] (0xc000840c60) (0xc000665f40) Stream removed, broadcasting: 1\nI0210 23:42:17.244164 34 log.go:172] (0xc000840c60) (0xc000610780) Stream removed, broadcasting: 3\nI0210 23:42:17.244177 34 log.go:172] (0xc000840c60) (0xc0004a5400) Stream removed, broadcasting: 5\n"
Feb 10 23:42:17.254: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 10 23:42:17.254: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 10 23:42:27.300: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 10 23:42:37.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-724 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 10 23:42:38.321: INFO: stderr: "I0210 23:42:38.143064 64 log.go:172] (0xc0008e4b00) (0xc0005cc000) Create stream\nI0210 23:42:38.143198 64 log.go:172] (0xc0008e4b00) (0xc0005cc000) Stream added, broadcasting: 1\nI0210 23:42:38.147591 64 log.go:172] (0xc0008e4b00) Reply frame received for 1\nI0210 23:42:38.147632 64 log.go:172] (0xc0008e4b00) (0xc000645c20) Create stream\nI0210 23:42:38.147649 64 log.go:172] (0xc0008e4b00) (0xc000645c20) Stream added, broadcasting: 3\nI0210 23:42:38.149809 64 log.go:172] (0xc0008e4b00) Reply frame received for 3\nI0210 23:42:38.149831 64 log.go:172] (0xc0008e4b00) (0xc0005cc140) Create stream\nI0210 23:42:38.149843 64 log.go:172] (0xc0008e4b00) (0xc0005cc140) Stream added, broadcasting: 5\nI0210 23:42:38.151730 64 log.go:172] (0xc0008e4b00) Reply frame received for 5\nI0210 23:42:38.221837 64 log.go:172] (0xc0008e4b00) Data frame received for 5\nI0210 23:42:38.221874 64 log.go:172] (0xc0005cc140) (5) Data frame handling\nI0210 23:42:38.221893 64 log.go:172] (0xc0005cc140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0210 23:42:38.223089 64 log.go:172] (0xc0008e4b00) Data frame received for 3\nI0210 23:42:38.223108 64 log.go:172] (0xc000645c20) (3) Data frame handling\nI0210 23:42:38.223121 64 log.go:172] (0xc000645c20) (3) Data frame sent\nI0210 23:42:38.309089 64 log.go:172] (0xc0008e4b00) (0xc000645c20) Stream removed, broadcasting: 3\nI0210 23:42:38.309146 64 log.go:172] (0xc0008e4b00) Data frame received for 1\nI0210 23:42:38.309174 64 log.go:172] (0xc0005cc000) (1) Data frame handling\nI0210 23:42:38.309192 64 log.go:172] (0xc0005cc000) (1) Data frame sent\nI0210 23:42:38.309206 64 log.go:172] (0xc0008e4b00) (0xc0005cc140) Stream removed, broadcasting: 5\nI0210 23:42:38.309254 64 log.go:172] (0xc0008e4b00) (0xc0005cc000) Stream removed, broadcasting: 1\nI0210 23:42:38.309274 64 log.go:172] (0xc0008e4b00) Go away received\nI0210 23:42:38.310022 64 log.go:172] (0xc0008e4b00) (0xc0005cc000) Stream removed, broadcasting: 1\nI0210 23:42:38.310041 64 log.go:172] (0xc0008e4b00) (0xc000645c20) Stream removed, broadcasting: 3\nI0210 23:42:38.310050 64 log.go:172] (0xc0008e4b00) (0xc0005cc140) Stream removed, broadcasting: 5\n"
Feb 10 23:42:38.321: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 10 23:42:38.321: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Feb 10 23:42:48.352: INFO: Waiting for StatefulSet statefulset-724/ss2 to complete update
Feb 10 23:42:48.352: INFO: Waiting for Pod statefulset-724/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 10 23:42:48.352: INFO: Waiting for Pod statefulset-724/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 10 23:42:58.365: INFO: Waiting for StatefulSet statefulset-724/ss2 to complete update
Feb 10 23:42:58.365: INFO: Waiting for Pod statefulset-724/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 10 23:42:58.365: INFO: Waiting for Pod statefulset-724/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 10 23:43:08.697: INFO: Waiting for StatefulSet statefulset-724/ss2 to complete update
Feb 10 23:43:08.697: INFO: Waiting for Pod statefulset-724/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 10 23:43:18.373: INFO: Waiting for StatefulSet statefulset-724/ss2 to complete update
Feb 10 23:43:18.373: INFO: Waiting for Pod statefulset-724/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 10 23:43:28.438: INFO: Waiting for StatefulSet statefulset-724/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 10 23:43:38.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-724 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 10 23:43:38.845: INFO: stderr: "I0210 23:43:38.617106 85 log.go:172] (0xc0008fedc0) (0xc0008cc140) Create stream\nI0210 23:43:38.617344 85 log.go:172] (0xc0008fedc0) (0xc0008cc140) Stream added, broadcasting: 1\nI0210 23:43:38.622530 85 log.go:172] (0xc0008fedc0) Reply frame received for 1\nI0210 23:43:38.622622 85 log.go:172] (0xc0008fedc0) (0xc000a12000) Create stream\nI0210 23:43:38.622655 85 log.go:172] (0xc0008fedc0) (0xc000a12000) Stream added, broadcasting: 3\nI0210 23:43:38.623757 85 log.go:172] (0xc0008fedc0) Reply frame received for 3\nI0210 23:43:38.623774 85 log.go:172] (0xc0008fedc0) (0xc0008cc1e0) Create stream\nI0210 23:43:38.623783 85 log.go:172] (0xc0008fedc0) (0xc0008cc1e0) Stream added, broadcasting: 5\nI0210 23:43:38.626274 85 log.go:172] (0xc0008fedc0) Reply frame received for 5\nI0210 23:43:38.713282 85 log.go:172] (0xc0008fedc0) Data frame received for 5\nI0210 23:43:38.713334 85 log.go:172] (0xc0008cc1e0) (5) Data frame handling\nI0210 23:43:38.713352 85 log.go:172] (0xc0008cc1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0210 23:43:38.757120 85 log.go:172] (0xc0008fedc0) Data frame received for 3\nI0210 23:43:38.757161 85 log.go:172] (0xc000a12000) (3) Data frame handling\nI0210 23:43:38.757181 85 log.go:172] (0xc000a12000) (3) Data frame sent\nI0210 23:43:38.836814 85 log.go:172] (0xc0008fedc0) Data frame received for 1\nI0210 23:43:38.836891 85 log.go:172] (0xc0008fedc0) (0xc000a12000) Stream removed, broadcasting: 3\nI0210 23:43:38.836949 85 log.go:172] (0xc0008cc140) (1) Data frame handling\nI0210 23:43:38.836984 85 log.go:172] (0xc0008cc140) (1) Data frame sent\nI0210 23:43:38.837086 85 log.go:172] (0xc0008fedc0) (0xc0008cc140) Stream removed, broadcasting: 1\nI0210 23:43:38.837427 85 log.go:172] (0xc0008fedc0) (0xc0008cc1e0) Stream removed, broadcasting: 5\nI0210 23:43:38.837895 85 log.go:172] (0xc0008fedc0) (0xc0008cc140) Stream removed, broadcasting: 1\nI0210 23:43:38.837928 85 log.go:172] (0xc0008fedc0) (0xc000a12000) Stream removed, broadcasting: 3\nI0210 23:43:38.837942 85 log.go:172] (0xc0008fedc0) (0xc0008cc1e0) Stream removed, broadcasting: 5\n"
Feb 10 23:43:38.845: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 10 23:43:38.845: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Feb 10 23:43:38.914: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 10 23:43:48.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-724 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 10 23:43:49.355: INFO: stderr: "I0210 23:43:49.172076 102 log.go:172] (0xc0006b6000) (0xc0004d28c0) Create stream\nI0210 23:43:49.172278 102 log.go:172] (0xc0006b6000) (0xc0004d28c0) Stream added, broadcasting: 1\nI0210 23:43:49.177358 102 log.go:172] (0xc0006b6000) Reply frame received for 1\nI0210 23:43:49.177480 102 log.go:172] (0xc0006b6000) (0xc000818000) Create stream\nI0210 23:43:49.177496 102 log.go:172] (0xc0006b6000) (0xc000818000) Stream added, broadcasting: 3\nI0210 23:43:49.181569 102 log.go:172] (0xc0006b6000) Reply frame received for 3\nI0210 23:43:49.181615 102 log.go:172] (0xc0006b6000) (0xc0008180a0) Create stream\nI0210 23:43:49.181625 102 log.go:172] (0xc0006b6000) (0xc0008180a0) Stream added, broadcasting: 5\nI0210 23:43:49.182930 102 log.go:172] (0xc0006b6000) Reply frame received for 5\nI0210 23:43:49.247557 102 log.go:172] (0xc0006b6000) Data frame received for 5\nI0210 23:43:49.247626 102 log.go:172] (0xc0008180a0) (5) Data frame handling\nI0210 23:43:49.247644 102 log.go:172] (0xc0008180a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0210 23:43:49.252205 102 log.go:172] (0xc0006b6000) Data frame received for 3\nI0210 23:43:49.252332 102 log.go:172] (0xc000818000) (3) Data frame handling\nI0210 23:43:49.252423 102 log.go:172] (0xc000818000) (3) Data frame sent\nI0210 23:43:49.345949 102 log.go:172] (0xc0006b6000) (0xc000818000) Stream removed, broadcasting: 3\nI0210 23:43:49.346386 102 log.go:172] (0xc0006b6000) Data frame received for 1\nI0210 23:43:49.346406 102 log.go:172] (0xc0004d28c0) (1) Data frame handling\nI0210 23:43:49.346421 102 log.go:172] (0xc0004d28c0) (1) Data frame sent\nI0210 23:43:49.346435 102 log.go:172] (0xc0006b6000) (0xc0004d28c0) Stream removed, broadcasting: 1\nI0210 23:43:49.347003 102 log.go:172] (0xc0006b6000) (0xc0008180a0) Stream removed, broadcasting: 5\nI0210 23:43:49.347315 102 log.go:172] (0xc0006b6000) Go away received\nI0210 23:43:49.347537 102 log.go:172] (0xc0006b6000) (0xc0004d28c0) Stream removed, broadcasting: 1\nI0210 23:43:49.347569 102 log.go:172] (0xc0006b6000) (0xc000818000) Stream removed, broadcasting: 3\nI0210 23:43:49.347577 102 log.go:172] (0xc0006b6000) (0xc0008180a0) Stream removed, broadcasting: 5\n"
Feb 10 23:43:49.355: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 10 23:43:49.355: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Feb 10 23:43:59.397: INFO: Waiting for StatefulSet statefulset-724/ss2 to complete update
Feb 10 23:43:59.397: INFO: Waiting for Pod statefulset-724/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 10 23:43:59.397: INFO: Waiting for Pod statefulset-724/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 10 23:43:59.397: INFO: Waiting for Pod statefulset-724/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 10 23:44:09.413: INFO: Waiting for StatefulSet statefulset-724/ss2 to complete update
Feb 10 23:44:09.414: INFO: Waiting for Pod statefulset-724/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 10 23:44:09.414: INFO: Waiting for Pod statefulset-724/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 10 23:44:19.411: INFO: Waiting for StatefulSet statefulset-724/ss2 to complete update
Feb 10 23:44:19.412: INFO: Waiting for Pod statefulset-724/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 10 23:44:29.416: INFO: Waiting for StatefulSet statefulset-724/ss2 to complete update
Feb 10 23:44:29.416: INFO: Waiting for Pod statefulset-724/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 10 23:44:39.436: INFO: Deleting all statefulset in ns statefulset-724
Feb 10 23:44:39.442: INFO: Scaling statefulset ss2 to 0
Feb 10 23:45:09.483: INFO: Waiting for statefulset status.replicas updated to 0
Feb 10 23:45:09.489: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:45:09.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-724" for this suite.
• [SLOW TEST:204.666 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":10,"skipped":88,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:45:09.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 10 23:45:18.252: INFO: Successfully updated pod "pod-update-ffc51b44-aad0-4f45-b465-82b673787ca4"
STEP: verifying the updated pod is in kubernetes
Feb 10 23:45:18.285: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:45:18.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1261" for this suite.
• [SLOW TEST:8.812 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":11,"skipped":103,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:45:18.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 10 23:45:25.094: INFO: 0 pods remaining Feb 10 23:45:25.094: INFO: 0 pods has nil DeletionTimestamp Feb 10 23:45:25.094: INFO: STEP: Gathering metrics W0210 23:45:26.074567 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 10 23:45:26.074: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:45:26.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3802" for this suite. 
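The deleteOptions that "says so" is the deletion propagation policy: with foreground propagation the RC is kept (carrying a deletionTimestamp and the foregroundDeletion finalizer) until the garbage collector has deleted every dependent pod, which is the behavior verified above. A sketch, with a hypothetical RC name:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	client, err := kubernetes.NewForConfig(cfg)
	must(err)

	// Foreground: the owner stays visible until every dependent pod is gone;
	// Background would delete the RC immediately and collect pods afterwards.
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("gc-3802").
		Delete(context.Background(), "simpletest.rc", metav1.DeleteOptions{PropagationPolicy: &policy})
	must(err)
}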
• [SLOW TEST:8.066 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":12,"skipped":106,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:45:26.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 10 23:45:26.861: INFO: Waiting up to 5m0s for pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826" in namespace "emptydir-1271" to be "success or failure" Feb 10 23:45:26.935: INFO: Pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826": Phase="Pending", Reason="", readiness=false. Elapsed: 73.543943ms Feb 10 23:45:28.944: INFO: Pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082805535s Feb 10 23:45:31.062: INFO: Pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200792937s Feb 10 23:45:33.398: INFO: Pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537065104s Feb 10 23:45:36.093: INFO: Pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826": Phase="Pending", Reason="", readiness=false. Elapsed: 9.231476217s Feb 10 23:45:38.102: INFO: Pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826": Phase="Pending", Reason="", readiness=false. Elapsed: 11.240688389s Feb 10 23:45:40.121: INFO: Pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826": Phase="Pending", Reason="", readiness=false. Elapsed: 13.259152925s Feb 10 23:45:42.127: INFO: Pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826": Phase="Pending", Reason="", readiness=false. Elapsed: 15.265415427s Feb 10 23:45:44.136: INFO: Pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 17.274950741s STEP: Saw pod success Feb 10 23:45:44.137: INFO: Pod "pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826" satisfied condition "success or failure" Feb 10 23:45:44.140: INFO: Trying to get logs from node jerma-node pod pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826 container test-container: STEP: delete the pod Feb 10 23:45:44.310: INFO: Waiting for pod pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826 to disappear Feb 10 23:45:44.321: INFO: Pod pod-e7bcf40b-9f3d-4bf4-90d9-36aa2c10d826 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:45:44.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1271" for this suite. • [SLOW TEST:17.936 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":13,"skipped":113,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:45:44.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 10 23:46:00.629: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 10 23:46:00.685: INFO: Pod pod-with-prestop-exec-hook still exists Feb 10 23:46:02.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 10 23:46:02.693: INFO: Pod pod-with-prestop-exec-hook still exists Feb 10 23:46:04.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 10 23:46:04.696: INFO: Pod pod-with-prestop-exec-hook still exists Feb 10 23:46:06.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 10 23:46:06.696: INFO: Pod pod-with-prestop-exec-hook still exists Feb 10 23:46:08.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 10 23:46:08.697: INFO: Pod pod-with-prestop-exec-hook still exists Feb 10 23:46:10.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 10 23:46:10.694: INFO: Pod pod-with-prestop-exec-hook still exists Feb 10 23:46:12.686: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 10 23:46:12.698: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:46:12.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4172" for this suite. • [SLOW TEST:28.393 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":14,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:46:12.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 10 23:46:12.861: INFO: Waiting up to 5m0s for pod "downward-api-c5937397-c317-43bf-8e06-dbd8ec8afec9" in namespace "downward-api-3003" to be "success or failure" Feb 10 23:46:12.868: INFO: Pod "downward-api-c5937397-c317-43bf-8e06-dbd8ec8afec9": 
Phase="Pending", Reason="", readiness=false. Elapsed: 6.563316ms Feb 10 23:46:14.875: INFO: Pod "downward-api-c5937397-c317-43bf-8e06-dbd8ec8afec9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013736099s Feb 10 23:46:16.890: INFO: Pod "downward-api-c5937397-c317-43bf-8e06-dbd8ec8afec9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028891297s Feb 10 23:46:20.326: INFO: Pod "downward-api-c5937397-c317-43bf-8e06-dbd8ec8afec9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.465026095s Feb 10 23:46:22.335: INFO: Pod "downward-api-c5937397-c317-43bf-8e06-dbd8ec8afec9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.47383217s STEP: Saw pod success Feb 10 23:46:22.335: INFO: Pod "downward-api-c5937397-c317-43bf-8e06-dbd8ec8afec9" satisfied condition "success or failure" Feb 10 23:46:22.341: INFO: Trying to get logs from node jerma-node pod downward-api-c5937397-c317-43bf-8e06-dbd8ec8afec9 container dapi-container: STEP: delete the pod Feb 10 23:46:22.388: INFO: Waiting for pod downward-api-c5937397-c317-43bf-8e06-dbd8ec8afec9 to disappear Feb 10 23:46:22.396: INFO: Pod downward-api-c5937397-c317-43bf-8e06-dbd8ec8afec9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:46:22.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3003" for this suite. • [SLOW TEST:9.707 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":15,"skipped":138,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:46:22.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-e5664445-48df-4c59-896b-8c67a8dc5274 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:46:32.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2319" for this suite. 
• [SLOW TEST:10.446 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":16,"skipped":160,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:46:32.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-2175/configmap-test-62cf4889-5ee6-4795-9a6c-d5f5299426fe STEP: Creating a pod to test consume configMaps Feb 10 23:46:33.005: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14" in namespace "configmap-2175" to be "success or failure" Feb 10 23:46:33.059: INFO: Pod "pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14": Phase="Pending", Reason="", readiness=false. Elapsed: 53.807355ms Feb 10 23:46:35.069: INFO: Pod "pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063943287s Feb 10 23:46:37.076: INFO: Pod "pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07010351s Feb 10 23:46:39.712: INFO: Pod "pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.706058998s Feb 10 23:46:41.720: INFO: Pod "pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.713983528s Feb 10 23:46:43.726: INFO: Pod "pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.720312345s STEP: Saw pod success Feb 10 23:46:43.726: INFO: Pod "pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14" satisfied condition "success or failure" Feb 10 23:46:43.730: INFO: Trying to get logs from node jerma-node pod pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14 container env-test: STEP: delete the pod Feb 10 23:46:44.169: INFO: Waiting for pod pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14 to disappear Feb 10 23:46:44.178: INFO: Pod pod-configmaps-8d007beb-ffa6-4c81-98dc-a97d447b6c14 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:46:44.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2175" for this suite. 
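Consuming a ConfigMap "via the environment" means wiring an EnvVar's ValueFrom to a ConfigMapKeyRef (envFrom would import every key at once). A sketch using the ConfigMap name and container name from the log; the key and image are assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	client, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "configmap-test-62cf4889-5ee6-4795-9a6c-d5f5299426fe",
							},
							Key: "data-1", // assumed key name
						},
					},
				}},
			}},
		},
	}
	_, err = client.CoreV1().Pods("configmap-2175").Create(context.Background(), pod, metav1.CreateOptions{})
	must(err)
}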
• [SLOW TEST:11.302 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":17,"skipped":172,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:46:44.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5742.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5742.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5742.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5742.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5742.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5742.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 10 23:46:56.857: INFO: DNS probes using dns-5742/dns-test-179fa080-40b1-4d05-8ad5-4050e9e11b8a succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:46:57.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5742" for this suite. 
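Those getent lookups resolve because a pod that sets both hostname and subdomain, backed by a headless Service named after the subdomain, gets an A record of the form <hostname>.<subdomain>.<namespace>.svc.cluster.local — here dns-querier-2.dns-test-service-2.dns-5742.svc.cluster.local. A sketch of that wiring; the selector labels, image, and service port are assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	client, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.Background()

	// Headless service (ClusterIP: None) named after the pod's subdomain.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"}, // assumed labels
			Ports:     []corev1.ServicePort{{Name: "dns", Port: 53}},
		},
	}
	_, err = client.CoreV1().Services("dns-5742").Create(ctx, svc, metav1.CreateOptions{})
	must(err)

	// Hostname + Subdomain yield the per-pod DNS name probed by the test.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "dns-querier-2",
			Labels: map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: "dns-test-service-2",
			Containers: []corev1.Container{{
				Name:    "querier",
				Image:   "busybox", // placeholder image
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	_, err = client.CoreV1().Pods("dns-5742").Create(ctx, pod, metav1.CreateOptions{})
	must(err)
}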
• [SLOW TEST:12.864 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":18,"skipped":174,"failed":0} SS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:46:57.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-d1e3b831-98c2-4914-aa5b-3f78d976e23a STEP: Creating secret with name s-test-opt-upd-8a6f02e1-6f5f-4e35-9777-b80b256f4d7b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d1e3b831-98c2-4914-aa5b-3f78d976e23a STEP: Updating secret s-test-opt-upd-8a6f02e1-6f5f-4e35-9777-b80b256f4d7b STEP: Creating secret with name s-test-opt-create-a3436f6d-c051-427e-b3f0-6986273d871a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:48:27.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2060" for this suite. 
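The "optional" in this spec is SecretProjection.Optional on a projected volume: sources marked optional may be absent, so the kubelet keeps the volume mounted and simply re-renders its contents as secrets are deleted, updated, and created — no pod restart, which is what the 90-second "waiting to observe update in volume" phase checks. A sketch of the volume wiring, using one of the secret names from the log; the image and command are placeholders:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	client, err := kubernetes.NewForConfig(cfg)
	must(err)

	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "busybox", // placeholder image
				Command:      []string{"sh", "-c", "while true; do ls /etc/projected; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "s-test-opt-del-d1e3b831-98c2-4914-aa5b-3f78d976e23a",
								},
								// Optional lets the pod start, and keep running,
								// even while this secret does not exist.
								Optional: &optional,
							},
						}},
					},
				},
			}},
		},
	}
	_, err = client.CoreV1().Pods("projected-2060").Create(context.Background(), pod, metav1.CreateOptions{})
	must(err)
}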
• [SLOW TEST:89.973 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":19,"skipped":176,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:48:27.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 10 23:48:27.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6569' Feb 10 23:48:27.344: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 10 23:48:27.344: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Feb 10 23:48:27.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6569' Feb 10 23:48:27.593: INFO: stderr: "" Feb 10 23:48:27.593: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:48:27.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6569" for this suite. 
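The stderr above flags --generator=job/v1 as deprecated; the forward-compatible route is kubectl create job, or creating the Job object directly. A sketch of the direct equivalent, reusing the job name, image, and namespace from the log:

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	client, err := kubernetes.NewForConfig(cfg)
	must(err)

	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure is what --restart=OnFailure used to select.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-job",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	_, err = client.BatchV1().Jobs("kubectl-6569").Create(context.Background(), job, metav1.CreateOptions{})
	must(err)
}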
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":280,"completed":20,"skipped":189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:48:27.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Feb 10 23:48:27.713: INFO: Created pod &Pod{ObjectMeta:{dns-4618 dns-4618 /api/v1/namespaces/dns-4618/pods/dns-4618 388ca225-c15d-4141-bcef-38e2cdb7dde2 7630778 0 2020-02-10 23:48:27 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrp9t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrp9t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrp9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNames
pace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 10 23:48:27.740: INFO: The status of Pod dns-4618 is Pending, waiting for it to be Running (with Ready = true) Feb 10 23:48:29.748: INFO: The status of Pod dns-4618 is Pending, waiting for it to be Running (with Ready = true) Feb 10 23:48:31.754: INFO: The status of Pod dns-4618 is Pending, waiting for it to be Running (with Ready = true) Feb 10 23:48:33.747: INFO: The status of Pod dns-4618 is Pending, waiting for it to be Running (with Ready = true) Feb 10 23:48:35.746: INFO: The status of Pod dns-4618 is Pending, waiting for it to be Running (with Ready = true) Feb 10 23:48:37.747: INFO: The status of Pod dns-4618 is Pending, waiting for it to be Running (with Ready = true) Feb 10 23:48:39.747: INFO: The status of Pod dns-4618 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Feb 10 23:48:39.747: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4618 PodName:dns-4618 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:48:39.747: INFO: >>> kubeConfig: /root/.kube/config I0210 23:48:39.819677 9 log.go:172] (0xc002c304d0) (0xc000cd5360) Create stream I0210 23:48:39.819865 9 log.go:172] (0xc002c304d0) (0xc000cd5360) Stream added, broadcasting: 1 I0210 23:48:39.831238 9 log.go:172] (0xc002c304d0) Reply frame received for 1 I0210 23:48:39.831294 9 log.go:172] (0xc002c304d0) (0xc000363220) Create stream I0210 23:48:39.831314 9 log.go:172] (0xc002c304d0) (0xc000363220) Stream added, broadcasting: 3 I0210 23:48:39.834242 9 log.go:172] (0xc002c304d0) Reply frame received for 3 I0210 23:48:39.834463 9 log.go:172] (0xc002c304d0) (0xc000b5df40) Create stream I0210 23:48:39.834484 9 log.go:172] (0xc002c304d0) (0xc000b5df40) Stream added, broadcasting: 5 I0210 23:48:39.839261 9 log.go:172] (0xc002c304d0) Reply frame received for 5 I0210 23:48:39.982828 9 log.go:172] (0xc002c304d0) Data frame received for 3 I0210 23:48:39.983012 9 log.go:172] (0xc000363220) (3) Data frame handling I0210 23:48:39.983089 9 log.go:172] (0xc000363220) (3) Data frame sent I0210 23:48:40.097847 9 log.go:172] (0xc002c304d0) Data frame received for 1 I0210 23:48:40.098230 9 log.go:172] (0xc002c304d0) (0xc000b5df40) Stream removed, broadcasting: 5 I0210 23:48:40.098417 9 log.go:172] (0xc000cd5360) (1) Data frame handling I0210 23:48:40.098448 9 log.go:172] (0xc000cd5360) (1) Data frame sent I0210 23:48:40.098479 9 log.go:172] (0xc002c304d0) (0xc000363220) Stream removed, broadcasting: 3 I0210 23:48:40.098577 9 log.go:172] (0xc002c304d0) (0xc000cd5360) Stream removed, broadcasting: 1 I0210 23:48:40.098629 9 log.go:172] (0xc002c304d0) Go away received I0210 23:48:40.098873 9 log.go:172] (0xc002c304d0) (0xc000cd5360) Stream removed, broadcasting: 1 I0210 23:48:40.098899 9 log.go:172] (0xc002c304d0) (0xc000363220) Stream removed, broadcasting: 3 I0210 23:48:40.098919 9 log.go:172] (0xc002c304d0) (0xc000b5df40) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Feb 10 23:48:40.099: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4618 PodName:dns-4618 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:48:40.099: INFO: >>> kubeConfig: /root/.kube/config I0210 23:48:40.146640 9 log.go:172] (0xc002cb8580) (0xc001847c20) Create stream I0210 23:48:40.146722 9 log.go:172] (0xc002cb8580) (0xc001847c20) Stream added, broadcasting: 1 I0210 23:48:40.149657 9 log.go:172] (0xc002cb8580) Reply frame received for 1 I0210 23:48:40.149712 9 log.go:172] (0xc002cb8580) (0xc000cd5400) Create stream I0210 23:48:40.149725 9 log.go:172] (0xc002cb8580) (0xc000cd5400) Stream added, broadcasting: 3 I0210 23:48:40.151511 9 log.go:172] (0xc002cb8580) Reply frame received for 3 I0210 23:48:40.151541 9 log.go:172] (0xc002cb8580) (0xc000f741e0) Create stream I0210 23:48:40.151549 9 log.go:172] (0xc002cb8580) (0xc000f741e0) Stream added, broadcasting: 5 I0210 23:48:40.154707 9 log.go:172] (0xc002cb8580) Reply frame received for 5 I0210 23:48:40.248729 9 log.go:172] (0xc002cb8580) Data frame received for 3 I0210 23:48:40.248815 9 log.go:172] (0xc000cd5400) (3) Data frame handling I0210 23:48:40.248866 9 log.go:172] (0xc000cd5400) (3) Data frame sent I0210 23:48:40.344369 9 log.go:172] (0xc002cb8580) Data frame received for 1 I0210 23:48:40.344461 9 log.go:172] (0xc002cb8580) (0xc000cd5400) Stream removed, broadcasting: 3 I0210 23:48:40.344549 9 log.go:172] (0xc001847c20) (1) Data frame handling I0210 23:48:40.344580 9 log.go:172] (0xc001847c20) (1) Data frame sent I0210 23:48:40.344623 9 log.go:172] (0xc002cb8580) (0xc001847c20) Stream removed, broadcasting: 1 I0210 23:48:40.344853 9 log.go:172] (0xc002cb8580) (0xc000f741e0) Stream removed, broadcasting: 5 I0210 23:48:40.344910 9 log.go:172] (0xc002cb8580) (0xc001847c20) Stream removed, broadcasting: 1 I0210 23:48:40.344925 9 log.go:172] (0xc002cb8580) (0xc000cd5400) Stream removed, broadcasting: 3 I0210 23:48:40.344942 9 log.go:172] (0xc002cb8580) (0xc000f741e0) Stream removed, broadcasting: 5 I0210 23:48:40.345259 9 log.go:172] (0xc002cb8580) Go away received Feb 10 23:48:40.345: INFO: Deleting pod dns-4618... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:48:40.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4618" for this suite. 
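The pod dump above shows the two fields under test: DNSPolicy:None plus a DNSConfig with nameserver 1.1.1.1 and search domain resolv.conf.local, which make the kubelet write the pod's resolv.conf purely from DNSConfig, ignoring both the cluster DNS service and the node. A sketch reproducing just that spec, with all values taken from the log:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	client, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-4618"},
		Spec: corev1.PodSpec{
			// None: resolv.conf is generated solely from DNSConfig below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
		},
	}
	_, err = client.CoreV1().Pods("dns-4618").Create(context.Background(), pod, metav1.CreateOptions{})
	must(err)
}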
• [SLOW TEST:12.804 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":21,"skipped":214,"failed":0} [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:48:40.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating server pod server in namespace prestop-2385 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2385 STEP: Deleting pre-stop pod Feb 10 23:49:03.748: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:49:03.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2385" for this suite. 
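The {"prestop": 1} the server reports arrives because a preStop exec hook runs inside the container after deletion starts and before SIGTERM, and must finish within the termination grace period. A sketch of the hook wiring; the notify URL is hypothetical, the image is a placeholder, and the field type is LifecycleHandler in current k8s.io/api (it was named Handler before Kubernetes 1.23):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	client, err := kubernetes.NewForConfig(cfg)
	must(err)

	grace := int64(30)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox", // placeholder image
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// Executed in the container between deletion and SIGTERM.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// Hypothetical endpoint standing in for the server pod.
							Command: []string{"wget", "-qO-", "http://server.prestop-2385.svc:8080/prestop"},
						},
					},
				},
			}},
		},
	}
	_, err = client.CoreV1().Pods("prestop-2385").Create(context.Background(), pod, metav1.CreateOptions{})
	must(err)
}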
• [SLOW TEST:23.454 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":280,"completed":22,"skipped":214,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:49:03.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 10 23:49:04.892: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 10 23:49:06.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975344, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975344, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975345, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975344, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 10 23:49:08.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975344, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975344, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975345, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975344, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Feb 10 23:49:10.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975344, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975344, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975345, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975344, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 10 23:49:13.986: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:49:14.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5330" for this suite. STEP: Destroying namespace "webhook-5330-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.270 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":23,"skipped":224,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:49:14.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:49:14.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5640" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":280,"completed":24,"skipped":236,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:49:14.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-projected-hbvn STEP: Creating a pod to test atomic-volume-subpath Feb 10 23:49:14.549: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-hbvn" in namespace "subpath-6105" to be "success or failure" Feb 10 23:49:14.560: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.890513ms Feb 10 23:49:16.582: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032950204s Feb 10 23:49:18.594: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04494654s Feb 10 23:49:20.603: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053600297s Feb 10 23:49:22.608: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Running", Reason="", readiness=true. Elapsed: 8.059053468s Feb 10 23:49:24.621: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Running", Reason="", readiness=true. Elapsed: 10.071752169s Feb 10 23:49:26.633: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Running", Reason="", readiness=true. Elapsed: 12.083541005s Feb 10 23:49:28.644: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.094919509s Feb 10 23:49:30.650: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Running", Reason="", readiness=true. Elapsed: 16.101318736s Feb 10 23:49:32.670: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Running", Reason="", readiness=true. Elapsed: 18.121435409s Feb 10 23:49:34.684: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Running", Reason="", readiness=true. Elapsed: 20.135249831s Feb 10 23:49:36.690: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Running", Reason="", readiness=true. Elapsed: 22.14090941s Feb 10 23:49:38.696: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Running", Reason="", readiness=true. Elapsed: 24.14703419s Feb 10 23:49:40.710: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Running", Reason="", readiness=true. Elapsed: 26.161399702s Feb 10 23:49:42.720: INFO: Pod "pod-subpath-test-projected-hbvn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.1711733s STEP: Saw pod success Feb 10 23:49:42.720: INFO: Pod "pod-subpath-test-projected-hbvn" satisfied condition "success or failure" Feb 10 23:49:42.723: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-hbvn container test-container-subpath-projected-hbvn: STEP: delete the pod Feb 10 23:49:42.759: INFO: Waiting for pod pod-subpath-test-projected-hbvn to disappear Feb 10 23:49:42.818: INFO: Pod pod-subpath-test-projected-hbvn no longer exists STEP: Deleting pod pod-subpath-test-projected-hbvn Feb 10 23:49:42.819: INFO: Deleting pod "pod-subpath-test-projected-hbvn" in namespace "subpath-6105" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:49:42.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6105" for this suite. 
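A subPath mount exposes a single path inside a volume instead of its root; for configmap/secret/projected volumes the "atomic writer" publishes updates via an atomic symlink swap, and the test pins one projected file through a subPath for the life of the pod. A sketch; the ConfigMap name and key are assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	client, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox", // placeholder image
				Command: []string{"cat", "/etc/projected-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/etc/projected-file",
					// Mount one file out of the volume instead of the whole tree.
					SubPath: "projected-key", // assumed key name
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // assumed
							},
						}},
					},
				},
			}},
		},
	}
	_, err = client.CoreV1().Pods("subpath-6105").Create(context.Background(), pod, metav1.CreateOptions{})
	must(err)
}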
• [SLOW TEST:28.499 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":25,"skipped":241,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:49:42.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 10 23:49:42.987: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-a b9ef921c-55cc-45b1-87f6-4d148f567d5d 7631147 0 2020-02-10 23:49:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 10 23:49:42.988: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-a b9ef921c-55cc-45b1-87f6-4d148f567d5d 7631147 0 2020-02-10 23:49:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 10 23:49:53.001: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-a b9ef921c-55cc-45b1-87f6-4d148f567d5d 7631181 0 2020-02-10 23:49:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 10 23:49:53.002: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-a b9ef921c-55cc-45b1-87f6-4d148f567d5d 7631181 0 2020-02-10 23:49:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 10 23:50:03.035: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6459 
/api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-a b9ef921c-55cc-45b1-87f6-4d148f567d5d 7631205 0 2020-02-10 23:49:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 10 23:50:03.036: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-a b9ef921c-55cc-45b1-87f6-4d148f567d5d 7631205 0 2020-02-10 23:49:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 10 23:50:13.051: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-a b9ef921c-55cc-45b1-87f6-4d148f567d5d 7631225 0 2020-02-10 23:49:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 10 23:50:13.052: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-a b9ef921c-55cc-45b1-87f6-4d148f567d5d 7631225 0 2020-02-10 23:49:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 10 23:50:23.073: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-b d2e0102a-0be9-40d9-89c1-2cd14d396792 7631249 0 2020-02-10 23:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 10 23:50:23.074: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-b d2e0102a-0be9-40d9-89c1-2cd14d396792 7631249 0 2020-02-10 23:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 10 23:50:33.088: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-b d2e0102a-0be9-40d9-89c1-2cd14d396792 7631273 0 2020-02-10 23:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 10 23:50:33.089: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6459 /api/v1/namespaces/watch-6459/configmaps/e2e-watch-test-configmap-b d2e0102a-0be9-40d9-89c1-2cd14d396792 7631273 0 2020-02-10 23:50:23 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:50:43.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6459" for this suite. 
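Each event above appears twice because two independent watches — one on label A, one on label A-or-B — both match configmap A. A sketch of one such watch, using the label selector from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	client, err := kubernetes.NewForConfig(cfg)
	must(err)

	w, err := client.CoreV1().ConfigMaps("watch-6459").Watch(context.Background(), metav1.ListOptions{
		// The "label A" watch; "watch-this-configmap in
		// (multiple-watchers-A,multiple-watchers-B)" would be the A-or-B watch.
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	must(err)
	defer w.Stop()

	// Mirrors the "Got : ADDED/MODIFIED/DELETED" lines in the log above.
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a *metav1.Status carried on error events
		}
		fmt.Printf("Got : %s %s (resourceVersion %s)\n", ev.Type, cm.Name, cm.ResourceVersion)
	}
}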
• [SLOW TEST:60.264 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":26,"skipped":244,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:50:43.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Feb 10 23:50:43.220: INFO: Waiting up to 5m0s for pod "downward-api-9818cee6-58bb-43b6-9fb9-f754d5ba1a04" in namespace "downward-api-4613" to be "success or failure" Feb 10 23:50:43.237: INFO: Pod "downward-api-9818cee6-58bb-43b6-9fb9-f754d5ba1a04": Phase="Pending", Reason="", readiness=false. Elapsed: 17.235978ms Feb 10 23:50:45.247: INFO: Pod "downward-api-9818cee6-58bb-43b6-9fb9-f754d5ba1a04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026626943s Feb 10 23:50:47.254: INFO: Pod "downward-api-9818cee6-58bb-43b6-9fb9-f754d5ba1a04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034025005s Feb 10 23:50:49.264: INFO: Pod "downward-api-9818cee6-58bb-43b6-9fb9-f754d5ba1a04": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044146012s Feb 10 23:50:51.275: INFO: Pod "downward-api-9818cee6-58bb-43b6-9fb9-f754d5ba1a04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055474267s STEP: Saw pod success Feb 10 23:50:51.276: INFO: Pod "downward-api-9818cee6-58bb-43b6-9fb9-f754d5ba1a04" satisfied condition "success or failure" Feb 10 23:50:51.287: INFO: Trying to get logs from node jerma-node pod downward-api-9818cee6-58bb-43b6-9fb9-f754d5ba1a04 container dapi-container: STEP: delete the pod Feb 10 23:50:51.468: INFO: Waiting for pod downward-api-9818cee6-58bb-43b6-9fb9-f754d5ba1a04 to disappear Feb 10 23:50:51.476: INFO: Pod downward-api-9818cee6-58bb-43b6-9fb9-f754d5ba1a04 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:50:51.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4613" for this suite. 
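The pod above reads the node's address through the downward API rather than from the network stack. A sketch of the wiring the test exercises, using the corev1 Go types (image and command are illustrative; the container name matches the log):

```go
import corev1 "k8s.io/api/core/v1"

// dapiContainer receives the pod's status.hostIP as the HOST_IP env var,
// which the test then reads back from the container log.
var dapiContainer = corev1.Container{
	Name:    "dapi-container",
	Image:   "busybox",
	Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
	Env: []corev1.EnvVar{{
		Name: "HOST_IP",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
		},
	}},
}
```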
• [SLOW TEST:8.386 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":27,"skipped":263,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:50:51.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Feb 10 23:50:51.681: INFO: namespace kubectl-1267 Feb 10 23:50:51.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1267' Feb 10 23:50:52.144: INFO: stderr: "" Feb 10 23:50:52.144: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 10 23:50:53.157: INFO: Selector matched 1 pods for map[app:agnhost] Feb 10 23:50:53.157: INFO: Found 0 / 1 Feb 10 23:50:54.152: INFO: Selector matched 1 pods for map[app:agnhost] Feb 10 23:50:54.152: INFO: Found 0 / 1 Feb 10 23:50:55.156: INFO: Selector matched 1 pods for map[app:agnhost] Feb 10 23:50:55.156: INFO: Found 0 / 1 Feb 10 23:50:56.157: INFO: Selector matched 1 pods for map[app:agnhost] Feb 10 23:50:56.157: INFO: Found 0 / 1 Feb 10 23:50:57.157: INFO: Selector matched 1 pods for map[app:agnhost] Feb 10 23:50:57.157: INFO: Found 0 / 1 Feb 10 23:50:58.153: INFO: Selector matched 1 pods for map[app:agnhost] Feb 10 23:50:58.153: INFO: Found 0 / 1 Feb 10 23:50:59.174: INFO: Selector matched 1 pods for map[app:agnhost] Feb 10 23:50:59.174: INFO: Found 1 / 1 Feb 10 23:50:59.174: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 10 23:50:59.177: INFO: Selector matched 1 pods for map[app:agnhost] Feb 10 23:50:59.177: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 10 23:50:59.177: INFO: wait on agnhost-master startup in kubectl-1267 Feb 10 23:50:59.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-wldc6 agnhost-master --namespace=kubectl-1267' Feb 10 23:50:59.340: INFO: stderr: "" Feb 10 23:50:59.340: INFO: stdout: "Paused\n" STEP: exposing RC Feb 10 23:50:59.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1267' Feb 10 23:50:59.596: INFO: stderr: "" Feb 10 23:50:59.596: INFO: stdout: "service/rm2 exposed\n" Feb 10 23:50:59.636: INFO: Service rm2 in namespace kubectl-1267 found. 
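`kubectl expose rc` synthesizes a Service whose selector targets the controller's pod labels. A rough Go equivalent of the rm2 Service created above, assuming the RC's pods carry the app=agnhost label that the selector matched earlier in this test:

```go
import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rm2 approximates `kubectl expose rc agnhost-master --name=rm2
// --port=1234 --target-port=6379`: traffic to port 1234 is forwarded
// to port 6379 of any pod matching the selector.
var rm2 = corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-1267"},
	Spec: corev1.ServiceSpec{
		Selector: map[string]string{"app": "agnhost"}, // assumed from the RC's pod labels
		Ports: []corev1.ServicePort{{
			Port:       1234,
			TargetPort: intstr.FromInt(6379),
		}},
	},
}
```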
STEP: exposing service Feb 10 23:51:01.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1267' Feb 10 23:51:01.904: INFO: stderr: "" Feb 10 23:51:01.905: INFO: stdout: "service/rm3 exposed\n" Feb 10 23:51:01.971: INFO: Service rm3 in namespace kubectl-1267 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:51:03.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1267" for this suite. • [SLOW TEST:12.499 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":280,"completed":28,"skipped":277,"failed":0} S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:51:03.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-fb278d94-b555-4d00-a517-dc24e8e65665 STEP: Creating a pod to test consume secrets Feb 10 23:51:04.553: INFO: Waiting up to 5m0s for pod "pod-secrets-f0f45823-9227-48be-9866-65f8d5751443" in namespace "secrets-459" to be "success or failure" Feb 10 23:51:04.560: INFO: Pod "pod-secrets-f0f45823-9227-48be-9866-65f8d5751443": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125739ms Feb 10 23:51:07.174: INFO: Pod "pod-secrets-f0f45823-9227-48be-9866-65f8d5751443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.620750661s Feb 10 23:51:09.219: INFO: Pod "pod-secrets-f0f45823-9227-48be-9866-65f8d5751443": Phase="Pending", Reason="", readiness=false. Elapsed: 4.665662085s Feb 10 23:51:11.227: INFO: Pod "pod-secrets-f0f45823-9227-48be-9866-65f8d5751443": Phase="Pending", Reason="", readiness=false. Elapsed: 6.673347678s Feb 10 23:51:13.237: INFO: Pod "pod-secrets-f0f45823-9227-48be-9866-65f8d5751443": Phase="Pending", Reason="", readiness=false. Elapsed: 8.68372405s Feb 10 23:51:15.243: INFO: Pod "pod-secrets-f0f45823-9227-48be-9866-65f8d5751443": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.689573852s STEP: Saw pod success Feb 10 23:51:15.243: INFO: Pod "pod-secrets-f0f45823-9227-48be-9866-65f8d5751443" satisfied condition "success or failure" Feb 10 23:51:15.246: INFO: Trying to get logs from node jerma-node pod pod-secrets-f0f45823-9227-48be-9866-65f8d5751443 container secret-volume-test: STEP: delete the pod Feb 10 23:51:15.324: INFO: Waiting for pod pod-secrets-f0f45823-9227-48be-9866-65f8d5751443 to disappear Feb 10 23:51:15.329: INFO: Pod pod-secrets-f0f45823-9227-48be-9866-65f8d5751443 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:51:15.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-459" for this suite. STEP: Destroying namespace "secret-namespace-2790" for this suite. • [SLOW TEST:11.353 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":29,"skipped":278,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:51:15.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 10 23:51:15.536: INFO: Waiting up to 5m0s for pod "pod-10206b7c-6fdf-4d71-b343-2dffe6d60008" in namespace "emptydir-7713" to be "success or failure" Feb 10 23:51:15.551: INFO: Pod "pod-10206b7c-6fdf-4d71-b343-2dffe6d60008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.150547ms Feb 10 23:51:17.563: INFO: Pod "pod-10206b7c-6fdf-4d71-b343-2dffe6d60008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027269278s Feb 10 23:51:19.576: INFO: Pod "pod-10206b7c-6fdf-4d71-b343-2dffe6d60008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039708446s Feb 10 23:51:21.582: INFO: Pod "pod-10206b7c-6fdf-4d71-b343-2dffe6d60008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046527913s Feb 10 23:51:23.593: INFO: Pod "pod-10206b7c-6fdf-4d71-b343-2dffe6d60008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.057417031s STEP: Saw pod success Feb 10 23:51:23.594: INFO: Pod "pod-10206b7c-6fdf-4d71-b343-2dffe6d60008" satisfied condition "success or failure" Feb 10 23:51:23.598: INFO: Trying to get logs from node jerma-node pod pod-10206b7c-6fdf-4d71-b343-2dffe6d60008 container test-container: STEP: delete the pod Feb 10 23:51:23.663: INFO: Waiting for pod pod-10206b7c-6fdf-4d71-b343-2dffe6d60008 to disappear Feb 10 23:51:23.677: INFO: Pod pod-10206b7c-6fdf-4d71-b343-2dffe6d60008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:51:23.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7713" for this suite. • [SLOW TEST:8.346 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":30,"skipped":287,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:51:23.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-map-0eedc111-8e99-4873-b9b7-85ab03268ef0 STEP: Creating a pod to test consume secrets Feb 10 23:51:24.110: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-734e1495-b378-4281-8967-57ddafc32d94" in namespace "projected-6712" to be "success or failure" Feb 10 23:51:24.113: INFO: Pod "pod-projected-secrets-734e1495-b378-4281-8967-57ddafc32d94": Phase="Pending", Reason="", readiness=false. Elapsed: 3.100466ms Feb 10 23:51:26.122: INFO: Pod "pod-projected-secrets-734e1495-b378-4281-8967-57ddafc32d94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011979526s Feb 10 23:51:28.143: INFO: Pod "pod-projected-secrets-734e1495-b378-4281-8967-57ddafc32d94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033320669s Feb 10 23:51:30.153: INFO: Pod "pod-projected-secrets-734e1495-b378-4281-8967-57ddafc32d94": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.043267057s STEP: Saw pod success Feb 10 23:51:30.153: INFO: Pod "pod-projected-secrets-734e1495-b378-4281-8967-57ddafc32d94" satisfied condition "success or failure" Feb 10 23:51:30.158: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-734e1495-b378-4281-8967-57ddafc32d94 container projected-secret-volume-test: STEP: delete the pod Feb 10 23:51:30.303: INFO: Waiting for pod pod-projected-secrets-734e1495-b378-4281-8967-57ddafc32d94 to disappear Feb 10 23:51:30.312: INFO: Pod pod-projected-secrets-734e1495-b378-4281-8967-57ddafc32d94 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:51:30.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6712" for this suite. • [SLOW TEST:6.630 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":31,"skipped":291,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:51:30.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 10 23:51:30.427: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 10 23:51:30.447: INFO: Waiting for terminating namespaces to be deleted... 
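The per-node pod inventory that follows is a field-selector query against the API server. A minimal sketch, assuming a context-taking client-go; the node name is taken from the log, and the status.phase clauses that skip terminated pods are an assumption about the filtering:

```go
import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsOnNode lists pods bound to one node across all namespaces,
// the same inventory logged for jerma-node below.
func podsOnNode(cs kubernetes.Interface, node string) (*corev1.PodList, error) {
	return cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node + ",status.phase!=Succeeded,status.phase!=Failed",
	})
}
```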
Feb 10 23:51:30.449: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 10 23:51:30.455: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 10 23:51:30.455: INFO: Container kube-proxy ready: true, restart count 0 Feb 10 23:51:30.455: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 10 23:51:30.455: INFO: Container weave ready: true, restart count 1 Feb 10 23:51:30.455: INFO: Container weave-npc ready: true, restart count 0 Feb 10 23:51:30.455: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 10 23:51:30.578: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 10 23:51:30.579: INFO: Container kube-scheduler ready: true, restart count 7 Feb 10 23:51:30.579: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 10 23:51:30.579: INFO: Container kube-apiserver ready: true, restart count 1 Feb 10 23:51:30.579: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 10 23:51:30.579: INFO: Container etcd ready: true, restart count 1 Feb 10 23:51:30.579: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 10 23:51:30.579: INFO: Container coredns ready: true, restart count 0 Feb 10 23:51:30.579: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 10 23:51:30.579: INFO: Container coredns ready: true, restart count 0 Feb 10 23:51:30.579: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 10 23:51:30.579: INFO: Container weave ready: true, restart count 0 Feb 10 23:51:30.579: INFO: Container weave-npc ready: true, restart count 0 Feb 10 23:51:30.579: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 10 23:51:30.579: INFO: Container kube-controller-manager ready: true, restart count 5 Feb 10 23:51:30.579: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 10 23:51:30.579: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Feb 10 23:51:30.814: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 10 23:51:30.814: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 10 23:51:30.814: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Feb 10 23:51:30.814: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub Feb 10 23:51:30.814: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub Feb 10 23:51:30.814: INFO: Pod 
kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Feb 10 23:51:30.814: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node Feb 10 23:51:30.814: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 10 23:51:30.814: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node Feb 10 23:51:30.814: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub STEP: Starting Pods to consume most of the cluster CPU. Feb 10 23:51:30.814: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Feb 10 23:51:30.827: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-288ef5ea-ea17-4eab-afd3-2318fe69e50e.15f22f8a7da1be74], Reason = [Scheduled], Message = [Successfully assigned sched-pred-558/filler-pod-288ef5ea-ea17-4eab-afd3-2318fe69e50e to jerma-server-mvvl6gufaqub] STEP: Considering event: Type = [Normal], Name = [filler-pod-288ef5ea-ea17-4eab-afd3-2318fe69e50e.15f22f8ba27931f9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-288ef5ea-ea17-4eab-afd3-2318fe69e50e.15f22f8c67778c74], Reason = [Created], Message = [Created container filler-pod-288ef5ea-ea17-4eab-afd3-2318fe69e50e] STEP: Considering event: Type = [Normal], Name = [filler-pod-288ef5ea-ea17-4eab-afd3-2318fe69e50e.15f22f8c8efff2df], Reason = [Started], Message = [Started container filler-pod-288ef5ea-ea17-4eab-afd3-2318fe69e50e] STEP: Considering event: Type = [Normal], Name = [filler-pod-84ba3024-6ffa-4693-b857-aa8863f969d1.15f22f8a72d3d390], Reason = [Scheduled], Message = [Successfully assigned sched-pred-558/filler-pod-84ba3024-6ffa-4693-b857-aa8863f969d1 to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-84ba3024-6ffa-4693-b857-aa8863f969d1.15f22f8b6205d836], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-84ba3024-6ffa-4693-b857-aa8863f969d1.15f22f8c0897ab49], Reason = [Created], Message = [Created container filler-pod-84ba3024-6ffa-4693-b857-aa8863f969d1] STEP: Considering event: Type = [Normal], Name = [filler-pod-84ba3024-6ffa-4693-b857-aa8863f969d1.15f22f8c27561cfb], Reason = [Started], Message = [Started container filler-pod-84ba3024-6ffa-4693-b857-aa8863f969d1] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f22f8cd3ebd553], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f22f8cd5bae8ea], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-mvvl6gufaqub STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:51:42.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-558" for this suite. 
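The filler pods and the rejected additional-pod come down to CPU requests versus node allocatable: once existing requests fill a node, any further request is unschedulable. A sketch of the request that matters, with the image taken from the events above and the CPU value from the log:

```go
import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// fillerContainer requests a fixed slice of node CPU. When the sum of such
// requests reaches a node's allocatable CPU, later pods fail with the
// "0/2 nodes are available: 2 Insufficient cpu." event seen above.
var fillerContainer = corev1.Container{
	Name:  "filler",
	Image: "k8s.gcr.io/pause:3.1",
	Resources: corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU: resource.MustParse("2786m"),
		},
	},
}
```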
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:11.904 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":280,"completed":32,"skipped":292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:51:42.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 10 23:51:42.345: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7" in namespace "projected-8831" to be "success or failure" Feb 10 23:51:42.361: INFO: Pod "downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.462912ms Feb 10 23:51:44.407: INFO: Pod "downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06181354s Feb 10 23:51:46.828: INFO: Pod "downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48284223s Feb 10 23:51:48.878: INFO: Pod "downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532579328s Feb 10 23:51:51.008: INFO: Pod "downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7": Phase="Running", Reason="", readiness=true. Elapsed: 8.663088358s Feb 10 23:51:53.016: INFO: Pod "downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.671398089s STEP: Saw pod success Feb 10 23:51:53.017: INFO: Pod "downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7" satisfied condition "success or failure" Feb 10 23:51:53.684: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7 container client-container: STEP: delete the pod Feb 10 23:51:55.115: INFO: Waiting for pod downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7 to disappear Feb 10 23:51:55.206: INFO: Pod downwardapi-volume-97f84ceb-85fb-4efa-b527-84b18cbf1fd7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:51:55.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8831" for this suite. • [SLOW TEST:12.991 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":33,"skipped":331,"failed":0} [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:51:55.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 10 23:51:55.445: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-accafd32-8f0a-47bd-88ca-20cf36911251" in namespace "security-context-test-53" to be "success or failure" Feb 10 23:51:55.468: INFO: Pod "busybox-readonly-false-accafd32-8f0a-47bd-88ca-20cf36911251": Phase="Pending", Reason="", readiness=false. Elapsed: 22.263313ms Feb 10 23:51:57.476: INFO: Pod "busybox-readonly-false-accafd32-8f0a-47bd-88ca-20cf36911251": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031049212s Feb 10 23:51:59.484: INFO: Pod "busybox-readonly-false-accafd32-8f0a-47bd-88ca-20cf36911251": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03898588s Feb 10 23:52:01.494: INFO: Pod "busybox-readonly-false-accafd32-8f0a-47bd-88ca-20cf36911251": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048204449s Feb 10 23:52:03.503: INFO: Pod "busybox-readonly-false-accafd32-8f0a-47bd-88ca-20cf36911251": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.057777561s Feb 10 23:52:03.504: INFO: Pod "busybox-readonly-false-accafd32-8f0a-47bd-88ca-20cf36911251" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:52:03.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-53" for this suite. • [SLOW TEST:8.295 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":34,"skipped":331,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:52:03.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 10 23:52:03.706: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b" in namespace "projected-7619" to be "success or failure" Feb 10 23:52:03.731: INFO: Pod "downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.925054ms Feb 10 23:52:05.741: INFO: Pod "downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03500741s Feb 10 23:52:07.749: INFO: Pod "downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042871749s Feb 10 23:52:09.756: INFO: Pod "downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049732784s Feb 10 23:52:11.764: INFO: Pod "downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057360018s Feb 10 23:52:13.772: INFO: Pod "downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.065544608s STEP: Saw pod success Feb 10 23:52:13.772: INFO: Pod "downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b" satisfied condition "success or failure" Feb 10 23:52:13.782: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b container client-container: STEP: delete the pod Feb 10 23:52:13.869: INFO: Waiting for pod downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b to disappear Feb 10 23:52:13.890: INFO: Pod downwardapi-volume-6663324e-355b-46f6-921b-cfa156943b8b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:52:13.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7619" for this suite. • [SLOW TEST:10.386 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":35,"skipped":349,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:52:13.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 10 23:52:14.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016" in namespace "projected-8977" to be "success or failure" Feb 10 23:52:14.038: INFO: Pod "downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016": Phase="Pending", Reason="", readiness=false. Elapsed: 17.740714ms Feb 10 23:52:16.434: INFO: Pod "downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.413545828s Feb 10 23:52:18.444: INFO: Pod "downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424012994s Feb 10 23:52:20.455: INFO: Pod "downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435516072s Feb 10 23:52:22.467: INFO: Pod "downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447041539s Feb 10 23:52:24.478: INFO: Pod "downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.458001015s STEP: Saw pod success Feb 10 23:52:24.479: INFO: Pod "downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016" satisfied condition "success or failure" Feb 10 23:52:24.485: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016 container client-container: STEP: delete the pod Feb 10 23:52:24.540: INFO: Waiting for pod downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016 to disappear Feb 10 23:52:24.555: INFO: Pod downwardapi-volume-c3eea9de-1e1d-4d06-b750-51b5f2378016 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:52:24.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8977" for this suite. • [SLOW TEST:10.703 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":36,"skipped":350,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:52:24.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Feb 10 23:52:24.788: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Feb 10 23:52:37.120: INFO: >>> kubeConfig: /root/.kube/config Feb 10 23:52:40.067: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:52:51.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2682" for this suite. 
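The test publishes CRDs that serve two versions of one API group and checks that both show up in the OpenAPI document. A hedged sketch of such a CRD using the apiextensions/v1 Go types; the group and kind are illustrative, and the per-version structural schema that v1 requires is omitted for brevity:

```go
import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVersionCRD serves v1 and v2 of the same group; exactly one version
// is the storage version, and every served version is published to OpenAPI.
var multiVersionCRD = apiextv1.CustomResourceDefinition{
	ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
	Spec: apiextv1.CustomResourceDefinitionSpec{
		Group: "example.com",
		Names: apiextv1.CustomResourceDefinitionNames{Plural: "foos", Kind: "Foo"},
		Scope: apiextv1.NamespaceScoped,
		Versions: []apiextv1.CustomResourceDefinitionVersion{
			// Each entry would also need a structural Schema to be accepted.
			{Name: "v1", Served: true, Storage: true},
			{Name: "v2", Served: true, Storage: false},
		},
	},
}
```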
• [SLOW TEST:26.568 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":37,"skipped":371,"failed":0} SSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:52:51.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9458, will wait for the garbage collector to delete the pods Feb 10 23:53:01.328: INFO: Deleting Job.batch foo took: 18.785649ms Feb 10 23:53:01.629: INFO: Terminating Job.batch foo pods took: 300.552675ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:53:42.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9458" for this suite. 
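"will wait for the garbage collector to delete the pods" is the signature of a non-orphaning delete: the Job object goes first, and its pods are collected afterwards. A minimal sketch of that delete, assuming a context-taking client-go and background propagation:

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJob removes the Job and lets the garbage collector take down its
// pods asynchronously, matching the behaviour logged above.
func deleteJob(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.BatchV1().Jobs(ns).Delete(context.TODO(), name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```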
• [SLOW TEST:51.267 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":38,"skipped":378,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:53:42.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 10 23:53:42.570: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d1642e74-79eb-4197-926c-7ecf8ce3ccdb" in namespace "security-context-test-21" to be "success or failure" Feb 10 23:53:42.586: INFO: Pod "busybox-user-65534-d1642e74-79eb-4197-926c-7ecf8ce3ccdb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.968044ms Feb 10 23:53:44.599: INFO: Pod "busybox-user-65534-d1642e74-79eb-4197-926c-7ecf8ce3ccdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028118139s Feb 10 23:53:46.608: INFO: Pod "busybox-user-65534-d1642e74-79eb-4197-926c-7ecf8ce3ccdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037121896s Feb 10 23:53:48.626: INFO: Pod "busybox-user-65534-d1642e74-79eb-4197-926c-7ecf8ce3ccdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054964654s Feb 10 23:53:50.632: INFO: Pod "busybox-user-65534-d1642e74-79eb-4197-926c-7ecf8ce3ccdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061261527s Feb 10 23:53:50.632: INFO: Pod "busybox-user-65534-d1642e74-79eb-4197-926c-7ecf8ce3ccdb" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:53:50.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-21" for this suite. 
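Pinning the container UID is a one-field security context. A sketch of what the busybox-user-65534 pod asserts (image and command are illustrative):

```go
import corev1 "k8s.io/api/core/v1"

// runAsNobody runs its process as UID 65534 ("nobody"); the test passes
// when the container observes that UID at runtime.
func runAsNobody() corev1.Container {
	uid := int64(65534)
	return corev1.Container{
		Name:            "busybox-user-65534",
		Image:           "busybox",
		Command:         []string{"sh", "-c", "id -u"},
		SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
	}
}
```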
• [SLOW TEST:8.209 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":39,"skipped":389,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:53:50.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 10 23:53:50.870: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c99293c-76b7-4f31-8433-f56772bfd7cf" in namespace "downward-api-2662" to be "success or failure" Feb 10 23:53:50.920: INFO: Pod "downwardapi-volume-8c99293c-76b7-4f31-8433-f56772bfd7cf": Phase="Pending", Reason="", readiness=false. Elapsed: 49.214139ms Feb 10 23:53:52.928: INFO: Pod "downwardapi-volume-8c99293c-76b7-4f31-8433-f56772bfd7cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056695264s Feb 10 23:53:54.936: INFO: Pod "downwardapi-volume-8c99293c-76b7-4f31-8433-f56772bfd7cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064557746s Feb 10 23:53:56.942: INFO: Pod "downwardapi-volume-8c99293c-76b7-4f31-8433-f56772bfd7cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070668279s Feb 10 23:53:58.949: INFO: Pod "downwardapi-volume-8c99293c-76b7-4f31-8433-f56772bfd7cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077989037s STEP: Saw pod success Feb 10 23:53:58.949: INFO: Pod "downwardapi-volume-8c99293c-76b7-4f31-8433-f56772bfd7cf" satisfied condition "success or failure" Feb 10 23:53:58.953: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8c99293c-76b7-4f31-8433-f56772bfd7cf container client-container: STEP: delete the pod Feb 10 23:53:59.892: INFO: Waiting for pod downwardapi-volume-8c99293c-76b7-4f31-8433-f56772bfd7cf to disappear Feb 10 23:53:59.935: INFO: Pod downwardapi-volume-8c99293c-76b7-4f31-8433-f56772bfd7cf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:53:59.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2662" for this suite. 
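"should set mode on item file" is about per-file permissions inside a downward API volume: an item's Mode overrides the volume's DefaultMode for that one file. A hedged sketch of the volume source involved (the path and the 0400 mode are illustrative):

```go
import corev1 "k8s.io/api/core/v1"

// downwardVol projects the pod name into a single file with an explicit
// 0400 mode; other items would fall back to the volume's DefaultMode.
func downwardVol() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
					Mode:     &mode,
				}},
			},
		},
	}
}
```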
• [SLOW TEST:9.305 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":40,"skipped":394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:53:59.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 10 23:54:18.221: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6180 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:54:18.221: INFO: >>> kubeConfig: /root/.kube/config I0210 23:54:18.273619 9 log.go:172] (0xc0023fbce0) (0xc002d2c320) Create stream I0210 23:54:18.273763 9 log.go:172] (0xc0023fbce0) (0xc002d2c320) Stream added, broadcasting: 1 I0210 23:54:18.279731 9 log.go:172] (0xc0023fbce0) Reply frame received for 1 I0210 23:54:18.279783 9 log.go:172] (0xc0023fbce0) (0xc002d2c460) Create stream I0210 23:54:18.279796 9 log.go:172] (0xc0023fbce0) (0xc002d2c460) Stream added, broadcasting: 3 I0210 23:54:18.281695 9 log.go:172] (0xc0023fbce0) Reply frame received for 3 I0210 23:54:18.281724 9 log.go:172] (0xc0023fbce0) (0xc000c3caa0) Create stream I0210 23:54:18.281744 9 log.go:172] (0xc0023fbce0) (0xc000c3caa0) Stream added, broadcasting: 5 I0210 23:54:18.283181 9 log.go:172] (0xc0023fbce0) Reply frame received for 5 I0210 23:54:18.374978 9 log.go:172] (0xc0023fbce0) Data frame received for 3 I0210 23:54:18.375044 9 log.go:172] (0xc002d2c460) (3) Data frame handling I0210 23:54:18.375063 9 log.go:172] (0xc002d2c460) (3) Data frame sent I0210 23:54:18.466105 9 log.go:172] (0xc0023fbce0) (0xc002d2c460) Stream removed, broadcasting: 3 I0210 23:54:18.466326 9 log.go:172] (0xc0023fbce0) Data frame received for 1 I0210 23:54:18.466378 9 log.go:172] (0xc002d2c320) (1) Data frame handling I0210 23:54:18.466415 9 log.go:172] (0xc002d2c320) (1) Data frame sent I0210 23:54:18.466434 9 log.go:172] (0xc0023fbce0) (0xc002d2c320) Stream removed, broadcasting: 1 I0210 23:54:18.466496 9 log.go:172] (0xc0023fbce0) (0xc000c3caa0) Stream removed, broadcasting: 5 I0210 23:54:18.466519 9 log.go:172] (0xc0023fbce0) Go away received I0210 23:54:18.466829 9 log.go:172] (0xc0023fbce0) 
(0xc002d2c320) Stream removed, broadcasting: 1 I0210 23:54:18.466861 9 log.go:172] (0xc0023fbce0) (0xc002d2c460) Stream removed, broadcasting: 3 I0210 23:54:18.466896 9 log.go:172] (0xc0023fbce0) (0xc000c3caa0) Stream removed, broadcasting: 5 Feb 10 23:54:18.466: INFO: Exec stderr: "" Feb 10 23:54:18.467: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6180 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:54:18.467: INFO: >>> kubeConfig: /root/.kube/config I0210 23:54:18.520991 9 log.go:172] (0xc0036bc370) (0xc002d2c6e0) Create stream I0210 23:54:18.521313 9 log.go:172] (0xc0036bc370) (0xc002d2c6e0) Stream added, broadcasting: 1 I0210 23:54:18.526300 9 log.go:172] (0xc0036bc370) Reply frame received for 1 I0210 23:54:18.526374 9 log.go:172] (0xc0036bc370) (0xc001182280) Create stream I0210 23:54:18.526387 9 log.go:172] (0xc0036bc370) (0xc001182280) Stream added, broadcasting: 3 I0210 23:54:18.527709 9 log.go:172] (0xc0036bc370) Reply frame received for 3 I0210 23:54:18.527768 9 log.go:172] (0xc0036bc370) (0xc0027e6000) Create stream I0210 23:54:18.527782 9 log.go:172] (0xc0036bc370) (0xc0027e6000) Stream added, broadcasting: 5 I0210 23:54:18.530115 9 log.go:172] (0xc0036bc370) Reply frame received for 5 I0210 23:54:18.637742 9 log.go:172] (0xc0036bc370) Data frame received for 3 I0210 23:54:18.638001 9 log.go:172] (0xc001182280) (3) Data frame handling I0210 23:54:18.638057 9 log.go:172] (0xc001182280) (3) Data frame sent I0210 23:54:18.742221 9 log.go:172] (0xc0036bc370) Data frame received for 1 I0210 23:54:18.742868 9 log.go:172] (0xc0036bc370) (0xc001182280) Stream removed, broadcasting: 3 I0210 23:54:18.743063 9 log.go:172] (0xc002d2c6e0) (1) Data frame handling I0210 23:54:18.743107 9 log.go:172] (0xc002d2c6e0) (1) Data frame sent I0210 23:54:18.743195 9 log.go:172] (0xc0036bc370) (0xc0027e6000) Stream removed, broadcasting: 5 I0210 23:54:18.743305 9 log.go:172] (0xc0036bc370) (0xc002d2c6e0) Stream removed, broadcasting: 1 I0210 23:54:18.743326 9 log.go:172] (0xc0036bc370) Go away received I0210 23:54:18.744006 9 log.go:172] (0xc0036bc370) (0xc002d2c6e0) Stream removed, broadcasting: 1 I0210 23:54:18.744025 9 log.go:172] (0xc0036bc370) (0xc001182280) Stream removed, broadcasting: 3 I0210 23:54:18.744029 9 log.go:172] (0xc0036bc370) (0xc0027e6000) Stream removed, broadcasting: 5 Feb 10 23:54:18.744: INFO: Exec stderr: "" Feb 10 23:54:18.744: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6180 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:54:18.744: INFO: >>> kubeConfig: /root/.kube/config I0210 23:54:18.792437 9 log.go:172] (0xc0036bc9a0) (0xc002d2c960) Create stream I0210 23:54:18.792678 9 log.go:172] (0xc0036bc9a0) (0xc002d2c960) Stream added, broadcasting: 1 I0210 23:54:18.797802 9 log.go:172] (0xc0036bc9a0) Reply frame received for 1 I0210 23:54:18.797891 9 log.go:172] (0xc0036bc9a0) (0xc000c3cb40) Create stream I0210 23:54:18.797915 9 log.go:172] (0xc0036bc9a0) (0xc000c3cb40) Stream added, broadcasting: 3 I0210 23:54:18.798994 9 log.go:172] (0xc0036bc9a0) Reply frame received for 3 I0210 23:54:18.799028 9 log.go:172] (0xc0036bc9a0) (0xc000c3cd20) Create stream I0210 23:54:18.799056 9 log.go:172] (0xc0036bc9a0) (0xc000c3cd20) Stream added, broadcasting: 5 I0210 23:54:18.800166 9 log.go:172] (0xc0036bc9a0) Reply frame received for 5 I0210 23:54:18.873816 9 
log.go:172] (0xc0036bc9a0) Data frame received for 3 I0210 23:54:18.873926 9 log.go:172] (0xc000c3cb40) (3) Data frame handling I0210 23:54:18.873956 9 log.go:172] (0xc000c3cb40) (3) Data frame sent I0210 23:54:18.939503 9 log.go:172] (0xc0036bc9a0) Data frame received for 1 I0210 23:54:18.939608 9 log.go:172] (0xc002d2c960) (1) Data frame handling I0210 23:54:18.939629 9 log.go:172] (0xc002d2c960) (1) Data frame sent I0210 23:54:18.939836 9 log.go:172] (0xc0036bc9a0) (0xc002d2c960) Stream removed, broadcasting: 1 I0210 23:54:18.939961 9 log.go:172] (0xc0036bc9a0) (0xc000c3cb40) Stream removed, broadcasting: 3 I0210 23:54:18.939994 9 log.go:172] (0xc0036bc9a0) (0xc000c3cd20) Stream removed, broadcasting: 5 I0210 23:54:18.940042 9 log.go:172] (0xc0036bc9a0) Go away received I0210 23:54:18.940061 9 log.go:172] (0xc0036bc9a0) (0xc002d2c960) Stream removed, broadcasting: 1 I0210 23:54:18.940071 9 log.go:172] (0xc0036bc9a0) (0xc000c3cb40) Stream removed, broadcasting: 3 I0210 23:54:18.940085 9 log.go:172] (0xc0036bc9a0) (0xc000c3cd20) Stream removed, broadcasting: 5 Feb 10 23:54:18.940: INFO: Exec stderr: "" Feb 10 23:54:18.940: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6180 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:54:18.940: INFO: >>> kubeConfig: /root/.kube/config I0210 23:54:18.977206 9 log.go:172] (0xc0036bcf20) (0xc002d2caa0) Create stream I0210 23:54:18.977461 9 log.go:172] (0xc0036bcf20) (0xc002d2caa0) Stream added, broadcasting: 1 I0210 23:54:18.984381 9 log.go:172] (0xc0036bcf20) Reply frame received for 1 I0210 23:54:18.984405 9 log.go:172] (0xc0036bcf20) (0xc0027e60a0) Create stream I0210 23:54:18.984413 9 log.go:172] (0xc0036bcf20) (0xc0027e60a0) Stream added, broadcasting: 3 I0210 23:54:18.985940 9 log.go:172] (0xc0036bcf20) Reply frame received for 3 I0210 23:54:18.985968 9 log.go:172] (0xc0036bcf20) (0xc000c3cf00) Create stream I0210 23:54:18.985993 9 log.go:172] (0xc0036bcf20) (0xc000c3cf00) Stream added, broadcasting: 5 I0210 23:54:18.990971 9 log.go:172] (0xc0036bcf20) Reply frame received for 5 I0210 23:54:19.072036 9 log.go:172] (0xc0036bcf20) Data frame received for 3 I0210 23:54:19.072135 9 log.go:172] (0xc0027e60a0) (3) Data frame handling I0210 23:54:19.072160 9 log.go:172] (0xc0027e60a0) (3) Data frame sent I0210 23:54:19.149204 9 log.go:172] (0xc0036bcf20) (0xc0027e60a0) Stream removed, broadcasting: 3 I0210 23:54:19.149419 9 log.go:172] (0xc0036bcf20) Data frame received for 1 I0210 23:54:19.149440 9 log.go:172] (0xc002d2caa0) (1) Data frame handling I0210 23:54:19.149464 9 log.go:172] (0xc002d2caa0) (1) Data frame sent I0210 23:54:19.149503 9 log.go:172] (0xc0036bcf20) (0xc002d2caa0) Stream removed, broadcasting: 1 I0210 23:54:19.150054 9 log.go:172] (0xc0036bcf20) (0xc000c3cf00) Stream removed, broadcasting: 5 I0210 23:54:19.150103 9 log.go:172] (0xc0036bcf20) (0xc002d2caa0) Stream removed, broadcasting: 1 I0210 23:54:19.150120 9 log.go:172] (0xc0036bcf20) (0xc0027e60a0) Stream removed, broadcasting: 3 I0210 23:54:19.150129 9 log.go:172] (0xc0036bcf20) (0xc000c3cf00) Stream removed, broadcasting: 5 Feb 10 23:54:19.150: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 10 23:54:19.150: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6180 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Feb 10 23:54:19.150: INFO: >>> kubeConfig: /root/.kube/config I0210 23:54:19.221969 9 log.go:172] (0xc0039460b0) (0xc0027e6280) Create stream I0210 23:54:19.222173 9 log.go:172] (0xc0039460b0) (0xc0027e6280) Stream added, broadcasting: 1 I0210 23:54:19.228228 9 log.go:172] (0xc0039460b0) Reply frame received for 1 I0210 23:54:19.228293 9 log.go:172] (0xc0039460b0) (0xc000cc6000) Create stream I0210 23:54:19.228306 9 log.go:172] (0xc0039460b0) (0xc000cc6000) Stream added, broadcasting: 3 I0210 23:54:19.229373 9 log.go:172] (0xc0039460b0) Reply frame received for 3 I0210 23:54:19.229391 9 log.go:172] (0xc0039460b0) (0xc0027e6320) Create stream I0210 23:54:19.229398 9 log.go:172] (0xc0039460b0) (0xc0027e6320) Stream added, broadcasting: 5 I0210 23:54:19.230451 9 log.go:172] (0xc0039460b0) Reply frame received for 5 I0210 23:54:19.322956 9 log.go:172] (0xc0039460b0) Data frame received for 3 I0210 23:54:19.323024 9 log.go:172] (0xc000cc6000) (3) Data frame handling I0210 23:54:19.323049 9 log.go:172] (0xc000cc6000) (3) Data frame sent I0210 23:54:19.389219 9 log.go:172] (0xc0039460b0) (0xc000cc6000) Stream removed, broadcasting: 3 I0210 23:54:19.389404 9 log.go:172] (0xc0039460b0) Data frame received for 1 I0210 23:54:19.389414 9 log.go:172] (0xc0027e6280) (1) Data frame handling I0210 23:54:19.389430 9 log.go:172] (0xc0027e6280) (1) Data frame sent I0210 23:54:19.389437 9 log.go:172] (0xc0039460b0) (0xc0027e6280) Stream removed, broadcasting: 1 I0210 23:54:19.389864 9 log.go:172] (0xc0039460b0) (0xc0027e6320) Stream removed, broadcasting: 5 I0210 23:54:19.389934 9 log.go:172] (0xc0039460b0) (0xc0027e6280) Stream removed, broadcasting: 1 I0210 23:54:19.389943 9 log.go:172] (0xc0039460b0) (0xc000cc6000) Stream removed, broadcasting: 3 I0210 23:54:19.389949 9 log.go:172] (0xc0039460b0) (0xc0027e6320) Stream removed, broadcasting: 5 Feb 10 23:54:19.389: INFO: Exec stderr: "" Feb 10 23:54:19.390: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6180 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:54:19.390: INFO: >>> kubeConfig: /root/.kube/config I0210 23:54:19.391688 9 log.go:172] (0xc0039460b0) Go away received I0210 23:54:19.436648 9 log.go:172] (0xc0039be370) (0xc000cc6780) Create stream I0210 23:54:19.436842 9 log.go:172] (0xc0039be370) (0xc000cc6780) Stream added, broadcasting: 1 I0210 23:54:19.442371 9 log.go:172] (0xc0039be370) Reply frame received for 1 I0210 23:54:19.442527 9 log.go:172] (0xc0039be370) (0xc0027e6460) Create stream I0210 23:54:19.442573 9 log.go:172] (0xc0039be370) (0xc0027e6460) Stream added, broadcasting: 3 I0210 23:54:19.445007 9 log.go:172] (0xc0039be370) Reply frame received for 3 I0210 23:54:19.445054 9 log.go:172] (0xc0039be370) (0xc000c3cfa0) Create stream I0210 23:54:19.445077 9 log.go:172] (0xc0039be370) (0xc000c3cfa0) Stream added, broadcasting: 5 I0210 23:54:19.446949 9 log.go:172] (0xc0039be370) Reply frame received for 5 I0210 23:54:19.537557 9 log.go:172] (0xc0039be370) Data frame received for 3 I0210 23:54:19.537799 9 log.go:172] (0xc0027e6460) (3) Data frame handling I0210 23:54:19.537866 9 log.go:172] (0xc0027e6460) (3) Data frame sent I0210 23:54:19.685081 9 log.go:172] (0xc0039be370) Data frame received for 1 I0210 23:54:19.685248 9 log.go:172] (0xc0039be370) (0xc0027e6460) Stream removed, broadcasting: 3 I0210 23:54:19.685332 9 log.go:172] (0xc000cc6780) (1) Data frame handling I0210 23:54:19.685355 9 log.go:172] 
(0xc000cc6780) (1) Data frame sent I0210 23:54:19.685511 9 log.go:172] (0xc0039be370) (0xc000c3cfa0) Stream removed, broadcasting: 5 I0210 23:54:19.685555 9 log.go:172] (0xc0039be370) (0xc000cc6780) Stream removed, broadcasting: 1 I0210 23:54:19.685736 9 log.go:172] (0xc0039be370) (0xc000cc6780) Stream removed, broadcasting: 1 I0210 23:54:19.685758 9 log.go:172] (0xc0039be370) (0xc0027e6460) Stream removed, broadcasting: 3 I0210 23:54:19.685806 9 log.go:172] (0xc0039be370) (0xc000c3cfa0) Stream removed, broadcasting: 5 Feb 10 23:54:19.686: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 10 23:54:19.686: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6180 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:54:19.687: INFO: >>> kubeConfig: /root/.kube/config I0210 23:54:19.812176 9 log.go:172] (0xc0030f2160) (0xc0027e6140) Create stream I0210 23:54:19.812605 9 log.go:172] (0xc0030f2160) (0xc0027e6140) Stream added, broadcasting: 1 I0210 23:54:19.821739 9 log.go:172] (0xc0030f2160) Reply frame received for 1 I0210 23:54:19.821942 9 log.go:172] (0xc0030f2160) (0xc0027e6280) Create stream I0210 23:54:19.821970 9 log.go:172] (0xc0030f2160) (0xc0027e6280) Stream added, broadcasting: 3 I0210 23:54:19.829348 9 log.go:172] (0xc0030f2160) Reply frame received for 3 I0210 23:54:19.829479 9 log.go:172] (0xc0030f2160) (0xc0027e6320) Create stream I0210 23:54:19.829504 9 log.go:172] (0xc0030f2160) (0xc0027e6320) Stream added, broadcasting: 5 I0210 23:54:19.831081 9 log.go:172] (0xc0030f2160) Reply frame received for 5 I0210 23:54:19.945229 9 log.go:172] (0xc0030f2160) Data frame received for 3 I0210 23:54:19.945350 9 log.go:172] (0xc0027e6280) (3) Data frame handling I0210 23:54:19.945373 9 log.go:172] (0xc0027e6280) (3) Data frame sent I0210 23:54:20.022016 9 log.go:172] (0xc0030f2160) Data frame received for 1 I0210 23:54:20.022183 9 log.go:172] (0xc0030f2160) (0xc0027e6320) Stream removed, broadcasting: 5 I0210 23:54:20.022273 9 log.go:172] (0xc0027e6140) (1) Data frame handling I0210 23:54:20.022298 9 log.go:172] (0xc0027e6140) (1) Data frame sent I0210 23:54:20.022351 9 log.go:172] (0xc0030f2160) (0xc0027e6280) Stream removed, broadcasting: 3 I0210 23:54:20.022414 9 log.go:172] (0xc0030f2160) (0xc0027e6140) Stream removed, broadcasting: 1 I0210 23:54:20.022439 9 log.go:172] (0xc0030f2160) Go away received I0210 23:54:20.022672 9 log.go:172] (0xc0030f2160) (0xc0027e6140) Stream removed, broadcasting: 1 I0210 23:54:20.022693 9 log.go:172] (0xc0030f2160) (0xc0027e6280) Stream removed, broadcasting: 3 I0210 23:54:20.022707 9 log.go:172] (0xc0030f2160) (0xc0027e6320) Stream removed, broadcasting: 5 Feb 10 23:54:20.022: INFO: Exec stderr: "" Feb 10 23:54:20.022: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6180 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:54:20.023: INFO: >>> kubeConfig: /root/.kube/config I0210 23:54:20.063424 9 log.go:172] (0xc00281a000) (0xc000192460) Create stream I0210 23:54:20.063724 9 log.go:172] (0xc00281a000) (0xc000192460) Stream added, broadcasting: 1 I0210 23:54:20.069152 9 log.go:172] (0xc00281a000) Reply frame received for 1 I0210 23:54:20.069271 9 log.go:172] (0xc00281a000) (0xc000103040) Create stream I0210 23:54:20.069286 9 log.go:172] 
(0xc00281a000) (0xc000103040) Stream added, broadcasting: 3 I0210 23:54:20.070876 9 log.go:172] (0xc00281a000) Reply frame received for 3 I0210 23:54:20.070908 9 log.go:172] (0xc00281a000) (0xc0027e6460) Create stream I0210 23:54:20.070925 9 log.go:172] (0xc00281a000) (0xc0027e6460) Stream added, broadcasting: 5 I0210 23:54:20.072133 9 log.go:172] (0xc00281a000) Reply frame received for 5 I0210 23:54:20.142791 9 log.go:172] (0xc00281a000) Data frame received for 3 I0210 23:54:20.143124 9 log.go:172] (0xc000103040) (3) Data frame handling I0210 23:54:20.143251 9 log.go:172] (0xc000103040) (3) Data frame sent I0210 23:54:20.235939 9 log.go:172] (0xc00281a000) Data frame received for 1 I0210 23:54:20.236104 9 log.go:172] (0xc00281a000) (0xc0027e6460) Stream removed, broadcasting: 5 I0210 23:54:20.236179 9 log.go:172] (0xc000192460) (1) Data frame handling I0210 23:54:20.236217 9 log.go:172] (0xc000192460) (1) Data frame sent I0210 23:54:20.236245 9 log.go:172] (0xc00281a000) (0xc000103040) Stream removed, broadcasting: 3 I0210 23:54:20.236557 9 log.go:172] (0xc00281a000) (0xc000192460) Stream removed, broadcasting: 1 I0210 23:54:20.236835 9 log.go:172] (0xc00281a000) Go away received I0210 23:54:20.237235 9 log.go:172] (0xc00281a000) (0xc000192460) Stream removed, broadcasting: 1 I0210 23:54:20.237286 9 log.go:172] (0xc00281a000) (0xc000103040) Stream removed, broadcasting: 3 I0210 23:54:20.237304 9 log.go:172] (0xc00281a000) (0xc0027e6460) Stream removed, broadcasting: 5 Feb 10 23:54:20.237: INFO: Exec stderr: "" Feb 10 23:54:20.237: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6180 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:54:20.237: INFO: >>> kubeConfig: /root/.kube/config I0210 23:54:20.295204 9 log.go:172] (0xc002cb8210) (0xc000193ea0) Create stream I0210 23:54:20.295340 9 log.go:172] (0xc002cb8210) (0xc000193ea0) Stream added, broadcasting: 1 I0210 23:54:20.302251 9 log.go:172] (0xc002cb8210) Reply frame received for 1 I0210 23:54:20.302295 9 log.go:172] (0xc002cb8210) (0xc0004319a0) Create stream I0210 23:54:20.302306 9 log.go:172] (0xc002cb8210) (0xc0004319a0) Stream added, broadcasting: 3 I0210 23:54:20.305993 9 log.go:172] (0xc002cb8210) Reply frame received for 3 I0210 23:54:20.306012 9 log.go:172] (0xc002cb8210) (0xc000b5cbe0) Create stream I0210 23:54:20.306019 9 log.go:172] (0xc002cb8210) (0xc000b5cbe0) Stream added, broadcasting: 5 I0210 23:54:20.308887 9 log.go:172] (0xc002cb8210) Reply frame received for 5 I0210 23:54:20.383617 9 log.go:172] (0xc002cb8210) Data frame received for 3 I0210 23:54:20.383783 9 log.go:172] (0xc0004319a0) (3) Data frame handling I0210 23:54:20.383876 9 log.go:172] (0xc0004319a0) (3) Data frame sent I0210 23:54:20.453880 9 log.go:172] (0xc002cb8210) Data frame received for 1 I0210 23:54:20.454105 9 log.go:172] (0xc002cb8210) (0xc0004319a0) Stream removed, broadcasting: 3 I0210 23:54:20.454245 9 log.go:172] (0xc000193ea0) (1) Data frame handling I0210 23:54:20.454291 9 log.go:172] (0xc000193ea0) (1) Data frame sent I0210 23:54:20.454472 9 log.go:172] (0xc002cb8210) (0xc000b5cbe0) Stream removed, broadcasting: 5 I0210 23:54:20.454754 9 log.go:172] (0xc002cb8210) (0xc000193ea0) Stream removed, broadcasting: 1 I0210 23:54:20.454861 9 log.go:172] (0xc002cb8210) Go away received I0210 23:54:20.455250 9 log.go:172] (0xc002cb8210) (0xc000193ea0) Stream removed, broadcasting: 1 I0210 23:54:20.455302 9 log.go:172] (0xc002cb8210) 
(0xc0004319a0) Stream removed, broadcasting: 3 I0210 23:54:20.455327 9 log.go:172] (0xc002cb8210) (0xc000b5cbe0) Stream removed, broadcasting: 5 Feb 10 23:54:20.455: INFO: Exec stderr: "" Feb 10 23:54:20.455: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6180 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 10 23:54:20.456: INFO: >>> kubeConfig: /root/.kube/config I0210 23:54:20.514094 9 log.go:172] (0xc0030f2420) (0xc0027e6500) Create stream I0210 23:54:20.514304 9 log.go:172] (0xc0030f2420) (0xc0027e6500) Stream added, broadcasting: 1 I0210 23:54:20.520068 9 log.go:172] (0xc0030f2420) Reply frame received for 1 I0210 23:54:20.520253 9 log.go:172] (0xc0030f2420) (0xc000b5d7c0) Create stream I0210 23:54:20.520301 9 log.go:172] (0xc0030f2420) (0xc000b5d7c0) Stream added, broadcasting: 3 I0210 23:54:20.523051 9 log.go:172] (0xc0030f2420) Reply frame received for 3 I0210 23:54:20.523129 9 log.go:172] (0xc0030f2420) (0xc000b5da40) Create stream I0210 23:54:20.523164 9 log.go:172] (0xc0030f2420) (0xc000b5da40) Stream added, broadcasting: 5 I0210 23:54:20.525080 9 log.go:172] (0xc0030f2420) Reply frame received for 5 I0210 23:54:20.624885 9 log.go:172] (0xc0030f2420) Data frame received for 3 I0210 23:54:20.625238 9 log.go:172] (0xc000b5d7c0) (3) Data frame handling I0210 23:54:20.625291 9 log.go:172] (0xc000b5d7c0) (3) Data frame sent I0210 23:54:20.732012 9 log.go:172] (0xc0030f2420) Data frame received for 1 I0210 23:54:20.732191 9 log.go:172] (0xc0030f2420) (0xc000b5da40) Stream removed, broadcasting: 5 I0210 23:54:20.732262 9 log.go:172] (0xc0027e6500) (1) Data frame handling I0210 23:54:20.732305 9 log.go:172] (0xc0027e6500) (1) Data frame sent I0210 23:54:20.732350 9 log.go:172] (0xc0030f2420) (0xc000b5d7c0) Stream removed, broadcasting: 3 I0210 23:54:20.732413 9 log.go:172] (0xc0030f2420) (0xc0027e6500) Stream removed, broadcasting: 1 I0210 23:54:20.732428 9 log.go:172] (0xc0030f2420) Go away received I0210 23:54:20.732590 9 log.go:172] (0xc0030f2420) (0xc0027e6500) Stream removed, broadcasting: 1 I0210 23:54:20.732613 9 log.go:172] (0xc0030f2420) (0xc000b5d7c0) Stream removed, broadcasting: 3 I0210 23:54:20.732625 9 log.go:172] (0xc0030f2420) (0xc000b5da40) Stream removed, broadcasting: 5 Feb 10 23:54:20.732: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:54:20.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6180" for this suite. 
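For anyone reproducing the /etc/hosts checks above by hand: the kubelet manages /etc/hosts only for containers that do not mount the file themselves, and only when the pod is off the host network, which is exactly the three cases this spec probes. A minimal sketch in Go with client-go (pod name, image, and namespace are illustrative rather than taken from this run; client-go v0.18+ call signatures, where Create takes a context, are assumed):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite reports using.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-demo"},
		Spec: corev1.PodSpec{
			// hostNetwork=false: the kubelet mounts its managed /etc/hosts.
			// With HostNetwork: true, or when a container volume-mounts its
			// own /etc/hosts (as busybox-3 does above), the file is left alone.
			HostNetwork: false,
			Containers: []corev1.Container{{
				Name:    "busybox-1",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := clientset.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// "cat /etc/hosts" in the container should then show the
	// "# Kubernetes-managed hosts file." header that marks kubelet management.
}

Running "cat /etc/hosts" in such a pod, as the exec streams above do, is the whole verification: managed files carry the kubelet's header, unmanaged ones do not.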
• [SLOW TEST:20.782 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":41,"skipped":457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:54:20.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-4d815fb9-29bd-4067-9889-811dc63d793a STEP: Creating the pod STEP: Updating configmap configmap-test-upd-4d815fb9-29bd-4067-9889-811dc63d793a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:54:31.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9533" for this suite. 
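The interesting part of the ConfigMap spec above is the "waiting to observe update in volume" step: configMap volume contents are refreshed by the kubelet asynchronously, so the Update call returns immediately while the file inside the pod catches up on a later sync. A sketch of the update side, with the map name and key/value as placeholders and a pre-built clientset assumed:

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateConfigMap bumps one key and relies on the kubelet to propagate
// the change into any volume that mounts the map.
func updateConfigMap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // illustrative key/value, not this run's data
	_, err = cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	// The mounted file updates eventually, not synchronously; a consumer
	// must poll, which is what the spec's final wait step is doing.
	return err
}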
• [SLOW TEST:10.304 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":42,"skipped":485,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:54:31.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 10 23:54:32.042: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 10 23:54:34.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975671, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 10 23:54:37.644: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975671, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 10 23:54:38.824: 
INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975671, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 10 23:54:40.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975672, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975671, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 10 23:54:43.893: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 10 23:54:43.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8436-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:54:45.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6573" for this suite. STEP: Destroying namespace "webhook-6573-markers" for this suite. 
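The step "Registering the mutating webhook for custom resource ... via the AdmissionRegistration API" in the spec above amounts to creating a single API object once the webhook deployment and service are ready. A hedged sketch of such a registration (the service reference, path, CRD group/resource, and CA bundle are placeholders; the real suite generates its own serving certificate and CRD):

package e2esketch

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func registerMutatingWebhook(ctx context.Context, cs kubernetes.Interface, caBundle []byte) error {
	path := "/mutating-custom-resource" // illustrative endpoint on the webhook server
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail
	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-mutating-webhook"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-crd.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default",          // placeholder
					Name:      "e2e-test-webhook", // service name echoes the log
					Path:      &path,
				},
				CABundle: caBundle,
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"}, // placeholder CRD group
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
			FailurePolicy:           &failurePolicy,
		}},
	}
	_, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
	return err
}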
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.172 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":43,"skipped":485,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:54:45.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:55:22.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1475" for this suite. STEP: Destroying namespace "nsdeletetest-5269" for this suite. Feb 10 23:55:22.646: INFO: Namespace nsdeletetest-5269 was already deleted STEP: Destroying namespace "nsdeletetest-2962" for this suite. 
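The namespace spec above follows the standard cascade-deletion pattern: delete the namespace, wait for it to disappear, then recreate it and verify no pods survived. The wait is the only subtle part, since the namespace lingers in Terminating while its contents are garbage-collected. A sketch of that wait loop (the interval and timeout are arbitrary choices here):

package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceAndWait removes ns and blocks until the API server
// reports it gone, which implies its pods were garbage-collected.
func deleteNamespaceAndWait(ctx context.Context, cs kubernetes.Interface, ns string) error {
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // namespace, and everything in it, is gone
		}
		return false, err // still terminating, or a real error
	})
}

This also explains the "Namespace nsdeletetest-5269 was already deleted" line above: tearing down a namespace that was deleted mid-test is expected to be a no-op.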
• [SLOW TEST:37.432 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":44,"skipped":499,"failed":0} S ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:55:22.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Feb 10 23:55:34.879: INFO: Successfully updated pod "adopt-release-88tr2" STEP: Checking that the Job readopts the Pod Feb 10 23:55:34.880: INFO: Waiting up to 15m0s for pod "adopt-release-88tr2" in namespace "job-6364" to be "adopted" Feb 10 23:55:34.943: INFO: Pod "adopt-release-88tr2": Phase="Running", Reason="", readiness=true. Elapsed: 62.518509ms Feb 10 23:55:36.994: INFO: Pod "adopt-release-88tr2": Phase="Running", Reason="", readiness=true. Elapsed: 2.11393178s Feb 10 23:55:36.994: INFO: Pod "adopt-release-88tr2" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Feb 10 23:55:37.517: INFO: Successfully updated pod "adopt-release-88tr2" STEP: Checking that the Job releases the Pod Feb 10 23:55:37.517: INFO: Waiting up to 15m0s for pod "adopt-release-88tr2" in namespace "job-6364" to be "released" Feb 10 23:55:37.527: INFO: Pod "adopt-release-88tr2": Phase="Running", Reason="", readiness=true. Elapsed: 9.609621ms Feb 10 23:55:37.527: INFO: Pod "adopt-release-88tr2" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:55:37.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6364" for this suite. 
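Adoption and release in the Job spec above are driven by two mechanisms: the controller adopts a pod whose labels match its selector even if the ownerReference was stripped, and it releases (drops the ownerReference from) a pod whose labels stop matching. So "orphaning" and "removing the labels" are both ordinary pod updates. A sketch of the release half (the label key is an assumption, not read from this run):

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// releasePodFromJob removes the (assumed) selector label so the Job
// controller stops matching the pod and drops its ownerReference.
func releasePodFromJob(ctx context.Context, cs kubernetes.Interface, ns, podName string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	delete(pod.Labels, "job") // hypothetical label key for illustration
	_, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
	return err
}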
• [SLOW TEST:14.959 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":45,"skipped":500,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:55:37.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Feb 10 23:55:37.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1786' Feb 10 23:55:42.366: INFO: stderr: "" Feb 10 23:55:42.366: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 10 23:55:42.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1786' Feb 10 23:55:42.633: INFO: stderr: "" Feb 10 23:55:42.634: INFO: stdout: "update-demo-nautilus-jcrdf update-demo-nautilus-lr9ml " Feb 10 23:55:42.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcrdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1786' Feb 10 23:55:42.814: INFO: stderr: "" Feb 10 23:55:42.815: INFO: stdout: "" Feb 10 23:55:42.815: INFO: update-demo-nautilus-jcrdf is created but not running Feb 10 23:55:47.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1786' Feb 10 23:55:48.156: INFO: stderr: "" Feb 10 23:55:48.156: INFO: stdout: "update-demo-nautilus-jcrdf update-demo-nautilus-lr9ml " Feb 10 23:55:48.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcrdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1786' Feb 10 23:55:48.347: INFO: stderr: "" Feb 10 23:55:48.347: INFO: stdout: "" Feb 10 23:55:48.347: INFO: update-demo-nautilus-jcrdf is created but not running Feb 10 23:55:53.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1786' Feb 10 23:55:53.437: INFO: stderr: "" Feb 10 23:55:53.437: INFO: stdout: "update-demo-nautilus-jcrdf update-demo-nautilus-lr9ml " Feb 10 23:55:53.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcrdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1786' Feb 10 23:55:53.568: INFO: stderr: "" Feb 10 23:55:53.568: INFO: stdout: "" Feb 10 23:55:53.568: INFO: update-demo-nautilus-jcrdf is created but not running Feb 10 23:55:58.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1786' Feb 10 23:55:58.694: INFO: stderr: "" Feb 10 23:55:58.694: INFO: stdout: "update-demo-nautilus-jcrdf update-demo-nautilus-lr9ml " Feb 10 23:55:58.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcrdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1786' Feb 10 23:55:58.804: INFO: stderr: "" Feb 10 23:55:58.805: INFO: stdout: "true" Feb 10 23:55:58.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jcrdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1786' Feb 10 23:55:58.933: INFO: stderr: "" Feb 10 23:55:58.933: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 10 23:55:58.933: INFO: validating pod update-demo-nautilus-jcrdf Feb 10 23:55:58.965: INFO: got data: { "image": "nautilus.jpg" } Feb 10 23:55:58.965: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 10 23:55:58.965: INFO: update-demo-nautilus-jcrdf is verified up and running Feb 10 23:55:58.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lr9ml -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1786' Feb 10 23:55:59.084: INFO: stderr: "" Feb 10 23:55:59.084: INFO: stdout: "true" Feb 10 23:55:59.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lr9ml -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1786' Feb 10 23:55:59.168: INFO: stderr: "" Feb 10 23:55:59.168: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 10 23:55:59.168: INFO: validating pod update-demo-nautilus-lr9ml Feb 10 23:55:59.174: INFO: got data: { "image": "nautilus.jpg" } Feb 10 23:55:59.174: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 10 23:55:59.174: INFO: update-demo-nautilus-lr9ml is verified up and running STEP: using delete to clean up resources Feb 10 23:55:59.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1786' Feb 10 23:55:59.307: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 10 23:55:59.307: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 10 23:55:59.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1786' Feb 10 23:55:59.436: INFO: stderr: "No resources found in kubectl-1786 namespace.\n" Feb 10 23:55:59.436: INFO: stdout: "" Feb 10 23:55:59.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1786 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 10 23:55:59.558: INFO: stderr: "" Feb 10 23:55:59.558: INFO: stdout: "update-demo-nautilus-jcrdf\nupdate-demo-nautilus-lr9ml\n" Feb 10 23:56:00.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1786' Feb 10 23:56:00.216: INFO: stderr: "No resources found in kubectl-1786 namespace.\n" Feb 10 23:56:00.216: INFO: stdout: "" Feb 10 23:56:00.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1786 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 10 23:56:00.832: INFO: stderr: "" Feb 10 23:56:00.833: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:56:00.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1786" for this suite. 
• [SLOW TEST:23.244 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":280,"completed":46,"skipped":501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:56:00.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 10 23:56:03.301: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 10 23:56:05.319: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975761, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 10 23:56:08.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63716975761, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 10 23:56:09.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975761, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 10 23:56:11.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975761, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 10 23:56:13.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975763, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975761, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 10 23:56:16.348: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a webhook that server cannot talk to, with fail closed 
policy, via the AdmissionRegistration API STEP: creating a namespace for the webhook STEP: creating a configmap that should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:56:16.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7890" for this suite. STEP: Destroying namespace "webhook-7890-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:15.809 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":47,"skipped":541,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:56:16.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8688 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Looking for a node to schedule the stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8688 STEP: Creating statefulset with conflicting port in namespace statefulset-8688 STEP: Waiting until pod test-pod starts running in namespace statefulset-8688 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-8688 Feb 10 23:56:31.013: INFO: Observed stateful pod in namespace: statefulset-8688, name: ss-0, uid: a4e37441-dd94-4527-afd1-cb7d4365783d, status phase: Pending. Waiting for statefulset controller to delete. Feb 10 23:56:32.328: INFO: Observed stateful pod in namespace: statefulset-8688, name: ss-0, uid: a4e37441-dd94-4527-afd1-cb7d4365783d, status phase: Failed. Waiting for statefulset controller to delete. Feb 10 23:56:32.425: INFO: Observed stateful pod in namespace: statefulset-8688, name: ss-0, uid: a4e37441-dd94-4527-afd1-cb7d4365783d, status phase: Failed. Waiting for statefulset controller to delete. Feb 10 23:56:32.436: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8688 STEP: Removing the pod with the conflicting port in namespace statefulset-8688 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8688 and enters the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 10 23:56:40.972: INFO: Deleting all statefulsets in ns statefulset-8688 Feb 10 23:56:40.979: INFO: Scaling statefulset ss to 0 Feb 10 23:56:51.047: INFO: Waiting for statefulset status.replicas to be updated to 0 Feb 10 23:56:51.057: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:56:51.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8688" for this suite. • [SLOW TEST:34.440 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":48,"skipped":542,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:56:51.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a projection with a secret named projected-secret-test-023c6a32-5fbe-45ed-a137-60058ad49945 STEP: Creating a pod to test consuming secrets Feb 10 23:56:51.255: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938" in namespace "projected-4256" to be "success or failure" Feb 10 23:56:51.283: INFO: Pod "pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938": Phase="Pending", Reason="", readiness=false. Elapsed: 27.568521ms Feb 10 23:56:53.289: INFO: Pod "pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033956969s Feb 10 23:56:55.300: INFO: Pod "pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044657422s Feb 10 23:56:57.313: INFO: Pod "pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.058329123s Feb 10 23:56:59.327: INFO: Pod "pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071376179s Feb 10 23:57:01.342: INFO: Pod "pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086850743s STEP: Saw pod success Feb 10 23:57:01.342: INFO: Pod "pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938" satisfied condition "success or failure" Feb 10 23:57:01.347: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938 container projected-secret-volume-test: STEP: delete the pod Feb 10 23:57:01.445: INFO: Waiting for pod pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938 to disappear Feb 10 23:57:01.452: INFO: Pod pod-projected-secrets-9f49e706-57c3-4845-a630-0ff434691938 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:57:01.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4256" for this suite. • [SLOW TEST:10.361 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":49,"skipped":546,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:57:01.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 10 23:57:01.643: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Pending, waiting for it to be Running (with Ready = true) Feb 10 23:57:03.657: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Pending, waiting for it to be Running (with Ready = true) Feb 10 23:57:05.652: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Pending, waiting for it to be Running (with Ready = true) Feb 10 23:57:08.684: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = false) Feb 10 23:57:10.344: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = false) 
Feb 10 23:57:11.651: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = false) Feb 10 23:57:13.656: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = false) Feb 10 23:57:15.651: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = false) Feb 10 23:57:17.654: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = false) Feb 10 23:57:19.652: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = false) Feb 10 23:57:21.652: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = false) Feb 10 23:57:23.658: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = false) Feb 10 23:57:25.654: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = false) Feb 10 23:57:27.654: INFO: The status of Pod test-webserver-809811cb-d25a-40a1-a193-7bcab209c81d is Running (Ready = true) Feb 10 23:57:27.662: INFO: Container started at 2020-02-10 23:57:06 +0000 UTC, pod became ready at 2020-02-10 23:57:26 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:57:27.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-802" for this suite. • [SLOW TEST:26.203 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":50,"skipped":584,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:57:27.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
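The roughly 20-second gap between "Container started at ... 23:57:06" and "pod became ready at ... 23:57:26" is the readiness probe's initial delay doing its job. A sketch of a pod shaped like the one this test creates, with an illustrative image, port and timing values (the real test uses its own webserver image and settings):

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver            # hypothetical; the run's pod carries a generated suffix
spec:
  containers:
  - name: test-webserver
    image: nginx                  # any container serving HTTP on port 80 works here
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20     # the pod must not report Ready before this delay
      periodSeconds: 5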
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:57:27.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:57:41.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3136" for this suite.
• [SLOW TEST:13.440 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":280,"completed":51,"skipped":621,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
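The STEP lines above correspond to a quota plus one pod whose requests fit under it; a second pod, or a request larger than the remainder, is rejected at admission. A sketch with illustrative names and limits (not the values the test actually uses):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota               # hypothetical name
spec:
  hard:
    pods: "1"
    requests.cpu: 500m
    requests.memory: 512Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod                 # hypothetical name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m                # counted against the quota while the pod exists
        memory: 64Mi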
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:57:41.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 10 23:57:41.231: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7491c30e-8798-4588-ac33-72df876eab94" in namespace "projected-2458" to be "success or failure"
Feb 10 23:57:41.246: INFO: Pod "downwardapi-volume-7491c30e-8798-4588-ac33-72df876eab94": Phase="Pending", Reason="", readiness=false. Elapsed: 14.999195ms
Feb 10 23:57:43.258: INFO: Pod "downwardapi-volume-7491c30e-8798-4588-ac33-72df876eab94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026308075s
Feb 10 23:57:45.270: INFO: Pod "downwardapi-volume-7491c30e-8798-4588-ac33-72df876eab94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039072841s
Feb 10 23:57:47.278: INFO: Pod "downwardapi-volume-7491c30e-8798-4588-ac33-72df876eab94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047034646s
Feb 10 23:57:49.289: INFO: Pod "downwardapi-volume-7491c30e-8798-4588-ac33-72df876eab94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057733335s
STEP: Saw pod success
Feb 10 23:57:49.289: INFO: Pod "downwardapi-volume-7491c30e-8798-4588-ac33-72df876eab94" satisfied condition "success or failure"
Feb 10 23:57:49.293: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7491c30e-8798-4588-ac33-72df876eab94 container client-container:
STEP: delete the pod
Feb 10 23:57:49.338: INFO: Waiting for pod downwardapi-volume-7491c30e-8798-4588-ac33-72df876eab94 to disappear
Feb 10 23:57:49.379: INFO: Pod downwardapi-volume-7491c30e-8798-4588-ac33-72df876eab94 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:57:49.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2458" for this suite.
• [SLOW TEST:8.273 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":52,"skipped":646,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
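A sketch of a pod using a projected downward API volume to surface its own cpu limit as a file, which is what this test reads back from the container logs. The names, the limit and the divisor are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical; the run's pod carries a generated suffix
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu   # written into the file, scaled by the divisor
              divisor: 1m            # a "1" cpu limit is exposed as the string "1000"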
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:57:49.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-d5ec6ade-8001-4315-9b68-82231a94b714
STEP: Creating a pod to test consume configMaps
Feb 10 23:57:49.467: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eed3a897-bbde-4bae-a0e0-b66afef9d987" in namespace "projected-2545" to be "success or failure"
Feb 10 23:57:49.516: INFO: Pod "pod-projected-configmaps-eed3a897-bbde-4bae-a0e0-b66afef9d987": Phase="Pending", Reason="", readiness=false. Elapsed: 48.170526ms
Feb 10 23:57:51.520: INFO: Pod "pod-projected-configmaps-eed3a897-bbde-4bae-a0e0-b66afef9d987": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052889369s
Feb 10 23:57:53.530: INFO: Pod "pod-projected-configmaps-eed3a897-bbde-4bae-a0e0-b66afef9d987": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062657044s
Feb 10 23:57:55.545: INFO: Pod "pod-projected-configmaps-eed3a897-bbde-4bae-a0e0-b66afef9d987": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077882358s
Feb 10 23:57:57.557: INFO: Pod "pod-projected-configmaps-eed3a897-bbde-4bae-a0e0-b66afef9d987": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089094857s
STEP: Saw pod success
Feb 10 23:57:57.557: INFO: Pod "pod-projected-configmaps-eed3a897-bbde-4bae-a0e0-b66afef9d987" satisfied condition "success or failure"
Feb 10 23:57:57.562: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-eed3a897-bbde-4bae-a0e0-b66afef9d987 container projected-configmap-volume-test:
STEP: delete the pod
Feb 10 23:57:57.630: INFO: Waiting for pod pod-projected-configmaps-eed3a897-bbde-4bae-a0e0-b66afef9d987 to disappear
Feb 10 23:57:57.638: INFO: Pod pod-projected-configmaps-eed3a897-bbde-4bae-a0e0-b66afef9d987 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:57:57.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2545" for this suite.
• [SLOW TEST:8.259 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":53,"skipped":661,"failed":0}
SSSSSSSSSS
------------------------------
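A sketch of the shape this test exercises: one ConfigMap mounted through two separate projected volumes in the same pod, each readable at its own path. Names and data are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-example    # hypothetical; the run used a generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"]
    volumeMounts:
    - name: cm-volume-1
      mountPath: /etc/cm-volume-1
    - name: cm-volume-2
      mountPath: /etc/cm-volume-2
  volumes:                              # the same ConfigMap backs both volumes
  - name: cm-volume-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-example
  - name: cm-volume-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-example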
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":54,"skipped":671,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:57:57.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Feb 10 23:57:58.008: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 10 23:58:09.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3889" for this suite. • [SLOW TEST:11.852 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":55,"skipped":704,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 10 23:58:09.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 10 23:58:10.617: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 10 23:58:12.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:57:57.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 10 23:57:58.008: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:58:09.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3889" for this suite.
• [SLOW TEST:11.852 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":55,"skipped":704,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
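A sketch of an init-container pod like the one this test creates: on a restartPolicy: Never pod, the init containers run in order, each to completion, before the app container starts. Names and commands are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example          # hypothetical name
spec:
  restartPolicy: Never            # the variant under test here
  initContainers:                 # run one at a time, in order, before the app container
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]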
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:58:09.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 10 23:58:10.617: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 10 23:58:12.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 23:58:14.637: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 23:58:16.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 10 23:58:18.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716975890, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 10 23:58:21.730: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:58:22.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8972" for this suite.
STEP: Destroying namespace "webhook-8972-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:12.366 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":56,"skipped":731,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
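The configuration being updated and patched above is shaped roughly like the sketch below. Only the service name (e2e-test-webhook) and namespace (webhook-8972) appear in the log; the configuration name, webhook name, path and CA bundle are placeholders. Toggling CREATE in and out of operations is what turns mutation off and back on in the two configMap steps:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook-config   # hypothetical name
webhooks:
- name: add-configmap-data.example.com     # hypothetical name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]                 # the list the test updates and then patches back
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-8972              # from the run
      name: e2e-test-webhook               # from the run
      path: /mutating-configmaps           # hypothetical path
    caBundle: Cg==                         # placeholder; the test injects its generated CA
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]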
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:58:22.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6628
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-6628
STEP: creating replication controller externalsvc in namespace services-6628
I0210 23:58:22.421460 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6628, replica count: 2
I0210 23:58:25.472772 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0210 23:58:28.473555 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0210 23:58:31.474367 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0210 23:58:34.475071 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Feb 10 23:58:34.566: INFO: Creating new exec pod
Feb 10 23:58:42.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6628 execpodtxqtc -- /bin/sh -x -c nslookup clusterip-service'
Feb 10 23:58:43.040: INFO: stderr: "I0210 23:58:42.806317     588 log.go:172] (0xc0008c58c0) (0xc00094c500) Create stream\nI0210 23:58:42.806406     588 log.go:172] (0xc0008c58c0) (0xc00094c500) Stream added, broadcasting: 1\nI0210 23:58:42.812504     588 log.go:172] (0xc0008c58c0) Reply frame received for 1\nI0210 23:58:42.812549     588 log.go:172] (0xc0008c58c0) (0xc0005ee780) Create stream\nI0210 23:58:42.812560     588 log.go:172] (0xc0008c58c0) (0xc0005ee780) Stream added, broadcasting: 3\nI0210 23:58:42.813691     588 log.go:172] (0xc0008c58c0) Reply frame received for 3\nI0210 23:58:42.813747     588 log.go:172] (0xc0008c58c0) (0xc0004d7400) Create stream\nI0210 23:58:42.813762     588 log.go:172] (0xc0008c58c0) (0xc0004d7400) Stream added, broadcasting: 5\nI0210 23:58:42.815352     588 log.go:172] (0xc0008c58c0) Reply frame received for 5\nI0210 23:58:42.906526     588 log.go:172] (0xc0008c58c0) Data frame received for 5\nI0210 23:58:42.906591     588 log.go:172] (0xc0004d7400) (5) Data frame handling\nI0210 23:58:42.906617     588 log.go:172] (0xc0004d7400) (5) Data frame sent\n+ nslookup clusterip-service\nI0210 23:58:42.923407     588 log.go:172] (0xc0008c58c0) Data frame received for 3\nI0210 23:58:42.923432     588 log.go:172] (0xc0005ee780) (3) Data frame handling\nI0210 23:58:42.923450     588 log.go:172] (0xc0005ee780) (3) Data frame sent\nI0210 23:58:42.929868     588 log.go:172] (0xc0008c58c0) Data frame received for 3\nI0210 23:58:42.929891     588 log.go:172] (0xc0005ee780) (3) Data frame handling\nI0210 23:58:42.929906     588 log.go:172] (0xc0005ee780) (3) Data frame sent\nI0210 23:58:43.029633     588 log.go:172] (0xc0008c58c0) Data frame received for 1\nI0210 23:58:43.030015     588 log.go:172] (0xc0008c58c0) (0xc0004d7400) Stream removed, broadcasting: 5\nI0210 23:58:43.030066     588 log.go:172] (0xc00094c500) (1) Data frame handling\nI0210 23:58:43.030093     588 log.go:172] (0xc00094c500) (1) Data frame sent\nI0210 23:58:43.030106     588 log.go:172] (0xc0008c58c0) (0xc00094c500) Stream removed, broadcasting: 1\nI0210 23:58:43.030686     588 log.go:172] (0xc0008c58c0) (0xc0005ee780) Stream removed, broadcasting: 3\nI0210 23:58:43.031188     588 log.go:172] (0xc0008c58c0) Go away received\nI0210 23:58:43.031258     588 log.go:172] (0xc0008c58c0) (0xc00094c500) Stream removed, broadcasting: 1\nI0210 23:58:43.031286     588 log.go:172] (0xc0008c58c0) (0xc0005ee780) Stream removed, broadcasting: 3\nI0210 23:58:43.031300     588 log.go:172] (0xc0008c58c0) (0xc0004d7400) Stream removed, broadcasting: 5\n"
Feb 10 23:58:43.040: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6628.svc.cluster.local\tcanonical name = externalsvc.services-6628.svc.cluster.local.\nName:\texternalsvc.services-6628.svc.cluster.local\nAddress: 10.96.227.253\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-6628, will wait for the garbage collector to delete the pods
Feb 10 23:58:43.106: INFO: Deleting ReplicationController externalsvc took: 8.643519ms
Feb 10 23:58:43.407: INFO: Terminating ReplicationController externalsvc pods took: 300.557942ms
Feb 10 23:59:02.480: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:59:02.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6628" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:40.399 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":57,"skipped":761,"failed":0}
SSSSSSSSSSS
------------------------------
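The two shapes of the Service before and after the type change look roughly like this. The name, namespace and externalName below match the log (the nslookup output shows the resulting CNAME); the selector and port are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: clusterip-service          # from the run
  namespace: services-6628         # from the run
spec:
  type: ClusterIP                  # the initial shape
  selector:
    name: externalsvc
  ports:
  - port: 80
---
# After the update the same object looks like this; cluster DNS then answers
# lookups for clusterip-service with the CNAME seen in the stdout above.
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-6628
spec:
  type: ExternalName
  externalName: externalsvc.services-6628.svc.cluster.local   # from the nslookup output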
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:59:02.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 10 23:59:11.226: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1194 pod-service-account-7f692eb8-830a-4fbf-8bc4-e156571cf6dc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 10 23:59:11.619: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1194 pod-service-account-7f692eb8-830a-4fbf-8bc4-e156571cf6dc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 10 23:59:12.063: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1194 pod-service-account-7f692eb8-830a-4fbf-8bc4-e156571cf6dc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:59:12.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1194" for this suite.
• [SLOW TEST:9.939 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":280,"completed":58,"skipped":772,"failed":0}
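The three kubectl exec reads above rely on the token volume that the kubelet projects into every pod using a service account. A sketch of such a pod; the name and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example   # hypothetical; the run's pod carries a generated suffix
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]
  # The service account's token, ca.crt and namespace are mounted under
  # /var/run/secrets/kubernetes.io/serviceaccount/, which is exactly what the
  # three cat commands in the log read back.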
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:59:12.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 10 23:59:12.625: INFO: Waiting up to 5m0s for pod "pod-4581fae8-8969-4776-9170-ccffb09cd9e5" in namespace "emptydir-1105" to be "success or failure"
Feb 10 23:59:12.649: INFO: Pod "pod-4581fae8-8969-4776-9170-ccffb09cd9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.765404ms
Feb 10 23:59:14.658: INFO: Pod "pod-4581fae8-8969-4776-9170-ccffb09cd9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032458356s
Feb 10 23:59:16.667: INFO: Pod "pod-4581fae8-8969-4776-9170-ccffb09cd9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041347535s
Feb 10 23:59:18.674: INFO: Pod "pod-4581fae8-8969-4776-9170-ccffb09cd9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047919019s
Feb 10 23:59:20.865: INFO: Pod "pod-4581fae8-8969-4776-9170-ccffb09cd9e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.239164284s
STEP: Saw pod success
Feb 10 23:59:20.865: INFO: Pod "pod-4581fae8-8969-4776-9170-ccffb09cd9e5" satisfied condition "success or failure"
Feb 10 23:59:20.871: INFO: Trying to get logs from node jerma-node pod pod-4581fae8-8969-4776-9170-ccffb09cd9e5 container test-container:
STEP: delete the pod
Feb 10 23:59:21.377: INFO: Waiting for pod pod-4581fae8-8969-4776-9170-ccffb09cd9e5 to disappear
Feb 10 23:59:21.384: INFO: Pod pod-4581fae8-8969-4776-9170-ccffb09cd9e5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:59:21.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1105" for this suite.
• [SLOW TEST:8.900 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":59,"skipped":772,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
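A sketch of the variant this test exercises: a tmpfs-backed emptyDir written by a non-root user, with the file's 0644 mode then verified. The uid, image and commands are illustrative (the real test uses its own mounttest image and flags):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example     # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the non-root variant under test
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /test-volume/file && ls -l /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir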
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:59:21.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-bbf4416d-2bd3-4f32-a9e0-d8b2b066410b
STEP: Creating a pod to test consume configMaps
Feb 10 23:59:21.582: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0" in namespace "projected-5636" to be "success or failure"
Feb 10 23:59:21.642: INFO: Pod "pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 59.437374ms
Feb 10 23:59:23.658: INFO: Pod "pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076003263s
Feb 10 23:59:25.667: INFO: Pod "pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085249784s
Feb 10 23:59:27.674: INFO: Pod "pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092032177s
Feb 10 23:59:29.687: INFO: Pod "pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104579707s
Feb 10 23:59:31.697: INFO: Pod "pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114646933s
STEP: Saw pod success
Feb 10 23:59:31.697: INFO: Pod "pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0" satisfied condition "success or failure"
Feb 10 23:59:31.705: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0 container projected-configmap-volume-test:
STEP: delete the pod
Feb 10 23:59:31.746: INFO: Waiting for pod pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0 to disappear
Feb 10 23:59:31.809: INFO: Pod pod-projected-configmaps-50dc3578-535b-4788-be16-81502ef6d6f0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:59:31.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5636" for this suite.
• [SLOW TEST:10.434 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":60,"skipped":793,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:59:31.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 10 23:59:46.181: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 23:59:46.191: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 23:59:48.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 23:59:48.246: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 23:59:50.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 23:59:50.419: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 23:59:52.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 23:59:52.208: INFO: Pod pod-with-poststart-http-hook still exists
Feb 10 23:59:54.192: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 10 23:59:54.198: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 10 23:59:54.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8110" for this suite.
• [SLOW TEST:22.388 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":61,"skipped":805,"failed":0}
SSSSSS
------------------------------
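A sketch of a pod with a postStart httpGet hook, the shape this test checks against the handler container its BeforeEach created. The path, port and target host below are placeholders, not values from the run:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # from the run
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # hypothetical path; the hook fires right after the container starts
          port: 8080                   # hypothetical port
          host: 10.32.0.1              # placeholder IP standing in for the handler pod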
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 10 23:59:54.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 10 23:59:54.335: INFO: Waiting up to 5m0s for pod "downward-api-debd6d2d-ca6a-43aa-a8a2-18b3bcc5d81c" in namespace "downward-api-556" to be "success or failure"
Feb 10 23:59:54.355: INFO: Pod "downward-api-debd6d2d-ca6a-43aa-a8a2-18b3bcc5d81c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.49504ms
Feb 10 23:59:56.371: INFO: Pod "downward-api-debd6d2d-ca6a-43aa-a8a2-18b3bcc5d81c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035915528s
Feb 10 23:59:58.381: INFO: Pod "downward-api-debd6d2d-ca6a-43aa-a8a2-18b3bcc5d81c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046027977s
Feb 11 00:00:00.389: INFO: Pod "downward-api-debd6d2d-ca6a-43aa-a8a2-18b3bcc5d81c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054453417s
STEP: Saw pod success
Feb 11 00:00:00.389: INFO: Pod "downward-api-debd6d2d-ca6a-43aa-a8a2-18b3bcc5d81c" satisfied condition "success or failure"
Feb 11 00:00:00.394: INFO: Trying to get logs from node jerma-node pod downward-api-debd6d2d-ca6a-43aa-a8a2-18b3bcc5d81c container dapi-container:
STEP: delete the pod
Feb 11 00:00:00.455: INFO: Waiting for pod downward-api-debd6d2d-ca6a-43aa-a8a2-18b3bcc5d81c to disappear
Feb 11 00:00:00.490: INFO: Pod downward-api-debd6d2d-ca6a-43aa-a8a2-18b3bcc5d81c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:00:00.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-556" for this suite.
• [SLOW TEST:6.288 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":62,"skipped":811,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
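A sketch of how a pod exposes its own UID through the downward API as an environment variable, which is what this test verifies; the pod and variable names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example     # hypothetical; the run's pod carries a generated suffix
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid    # the pod's UID, resolved at creation time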
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:00:00.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's args
Feb 11 00:00:00.684: INFO: Waiting up to 5m0s for pod "var-expansion-9d7bfc13-a4cc-40d3-afb6-156cf796dbb6" in namespace "var-expansion-2319" to be "success or failure"
Feb 11 00:00:00.812: INFO: Pod "var-expansion-9d7bfc13-a4cc-40d3-afb6-156cf796dbb6": Phase="Pending", Reason="", readiness=false. Elapsed: 128.18157ms
Feb 11 00:00:02.818: INFO: Pod "var-expansion-9d7bfc13-a4cc-40d3-afb6-156cf796dbb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133821093s
Feb 11 00:00:04.837: INFO: Pod "var-expansion-9d7bfc13-a4cc-40d3-afb6-156cf796dbb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153426262s
Feb 11 00:00:06.844: INFO: Pod "var-expansion-9d7bfc13-a4cc-40d3-afb6-156cf796dbb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159905223s
Feb 11 00:00:08.854: INFO: Pod "var-expansion-9d7bfc13-a4cc-40d3-afb6-156cf796dbb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.170386887s
STEP: Saw pod success
Feb 11 00:00:08.855: INFO: Pod "var-expansion-9d7bfc13-a4cc-40d3-afb6-156cf796dbb6" satisfied condition "success or failure"
Feb 11 00:00:08.860: INFO: Trying to get logs from node jerma-node pod var-expansion-9d7bfc13-a4cc-40d3-afb6-156cf796dbb6 container dapi-container:
STEP: delete the pod
Feb 11 00:00:09.088: INFO: Waiting for pod var-expansion-9d7bfc13-a4cc-40d3-afb6-156cf796dbb6 to disappear
Feb 11 00:00:09.107: INFO: Pod var-expansion-9d7bfc13-a4cc-40d3-afb6-156cf796dbb6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:00:09.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2319" for this suite.
• [SLOW TEST:8.615 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":63,"skipped":835,"failed":0}
SSSSS
------------------------------
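A sketch of $(VAR) substitution in a container's args, the mechanism this test exercises; names and values are illustrative. Note that $(TEST_VAR) is expanded by Kubernetes from the container's env, not by the shell:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example    # hypothetical; the run's pod carries a generated suffix
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo test-value is $(TEST_VAR)"]   # expanded to "test-value" before the shell runs
    env:
    - name: TEST_VAR
      value: test-value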
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:00:09.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 11 00:00:09.956: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 11 00:00:11.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976010, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:00:13.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976010, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:00:17.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976010, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:00:17.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976010, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:00:20.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976010, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976009, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 11 00:00:23.087: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:00:27.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7108" for this suite.
STEP: Destroying namespace "webhook-7108-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:18.690 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":64,"skipped":840,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:00:27.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 11 00:00:28.455: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 11 00:00:30.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:00:32.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:00:35.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:00:37.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:00:40.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976028, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 11 00:00:43.497: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:00:53.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9465" for this suite. STEP: Destroying namespace "webhook-9465-markers" for this suite.
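
The webhook spec above drives a validating admission webhook end to end: it registers the webhook through the AdmissionRegistration API, confirms that non-compliant Pod and ConfigMap creations are denied, confirms that PUT and PATCH updates turning an admitted ConfigMap non-compliant are rejected, and confirms that a whitelisted namespace bypasses the policy. The handler the test actually deploys (sample-webhook-deployment) does not appear in this log; the sketch below is a hand-rolled stand-in that only illustrates the admission.k8s.io/v1 AdmissionReview request/response shape behind those denials, with hypothetical names (exemptNamespaces, /validate) and a deny-by-kind rule far cruder than the real test policy.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Minimal wire-format structs for an admission.k8s.io/v1 AdmissionReview.
// Only the fields needed for an allow/deny decision are modeled here.
type admissionReview struct {
	APIVersion string             `json:"apiVersion"`
	Kind       string             `json:"kind"`
	Request    *admissionRequest  `json:"request,omitempty"`
	Response   *admissionResponse `json:"response,omitempty"`
}

type admissionRequest struct {
	UID       string `json:"uid"`
	Namespace string `json:"namespace"`
	Kind      struct {
		Kind string `json:"kind"`
	} `json:"kind"`
}

type admissionResponse struct {
	UID     string       `json:"uid"`
	Allowed bool         `json:"allowed"`
	Status  *statusField `json:"status,omitempty"`
}

type statusField struct {
	Message string `json:"message"`
}

// exemptNamespaces plays the role of the test's whitelisted namespace:
// objects created there are admitted even though they violate the policy.
// The namespace name is hypothetical.
var exemptNamespaces = map[string]bool{"exempt-ns": true}

func validate(w http.ResponseWriter, r *http.Request) {
	var review admissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}
	// Echo the request UID back; the API server matches responses on it.
	resp := &admissionResponse{UID: review.Request.UID, Allowed: true}
	if kind := review.Request.Kind.Kind; (kind == "Pod" || kind == "ConfigMap") && !exemptNamespaces[review.Request.Namespace] {
		// Denying with a status message is what the API server surfaces
		// to the client as the rejection the test asserts on.
		resp.Allowed = false
		resp.Status = &statusField{Message: fmt.Sprintf("%s denied by policy", kind)}
	}
	review.Request = nil
	review.Response = resp
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/validate", validate)
	// A real admission webhook must serve TLS with a certificate the API
	// server trusts; plain HTTP keeps this sketch self-contained.
	log.Fatal(http.ListenAndServe(":8443", nil))
}

A production webhook would also inspect the submitted object rather than denying by kind alone; the point here is only that returning allowed:false with a status message is the mechanism behind every "should be denied"/"should be rejected" step above.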
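Once this teardown completes, the next spec, [sig-apps] Deployment deployment should support proportional scaling, scales a Deployment from 10 to 30 replicas while a rollout to a non-existent image (webserver:404) is stuck. With maxSurge=3 and maxUnavailable=2, the stuck rollout settles at 8 old-image and 5 new-image replicas: 13 in total, the 10 desired plus the surge allowance. Scaling to 30 raises that ceiling to 33, and each ReplicaSet grows in proportion to its share of the old ceiling, which yields the 20 and 13 the test verifies. A minimal Go sketch of that arithmetic, assuming round-half-away-from-zero rounding (proportionalSize is an illustrative helper, not the controller's code):

package main

import (
	"fmt"
	"math"
)

// proportionalSize computes one ReplicaSet's new size when a Deployment is
// scaled mid-rollout: the ReplicaSet keeps its share of the allowed total
// (desired replicas + maxSurge) as that total moves from oldAllowed to
// newAllowed. This sketches the proportional rule only; the controller's
// full algorithm also tracks rounding leftovers via annotations.
func proportionalSize(rsReplicas, newAllowed, oldAllowed int32) int32 {
	return int32(math.Round(float64(rsReplicas) * float64(newAllowed) / float64(oldAllowed)))
}

func main() {
	const (
		maxSurge   = 3  // from the Deployment's RollingUpdate strategy
		oldDesired = 10 // replicas before the scale-up
		newDesired = 30 // replicas after the scale-up
	)
	oldAllowed := int32(oldDesired + maxSurge) // 13 pods may exist mid-rollout
	newAllowed := int32(newDesired + maxSurge) // 33 pods may exist after scaling

	firstRS := proportionalSize(8, newAllowed, oldAllowed)  // old ReplicaSet held 8
	secondRS := proportionalSize(5, newAllowed, oldAllowed) // new ReplicaSet held 5

	fmt.Println(firstRS, secondRS) // 20 13
}

Run as-is this prints 20 13, matching the .spec.replicas values the spec below checks on the first and second rollout's ReplicaSets; the real controller additionally records max-replicas annotations on each ReplicaSet and distributes any rounding leftover across them.
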
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:26.054 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":65,"skipped":848,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:00:53.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:00:54.009: INFO: Creating deployment "webserver-deployment" Feb 11 00:00:54.020: INFO: Waiting for observed generation 1 Feb 11 00:00:56.872: INFO: Waiting for all required pods to come up Feb 11 00:00:56.890: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 11 00:01:25.292: INFO: Waiting for deployment "webserver-deployment" to complete Feb 11 00:01:25.305: INFO: Updating deployment "webserver-deployment" with a non-existent image Feb 11 00:01:25.318: INFO: Updating deployment webserver-deployment Feb 11 00:01:25.318: INFO: Waiting for observed generation 2 Feb 11 00:01:27.415: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 11 00:01:27.711: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 11 00:01:27.759: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Feb 11 00:01:27.772: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 11 00:01:27.772: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 11 00:01:27.777: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Feb 11 00:01:27.785: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Feb 11 00:01:27.785: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Feb 11 00:01:27.797: INFO: Updating deployment webserver-deployment Feb 11 00:01:27.797: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Feb 11 00:01:28.507: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 11 00:01:28.830: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] 
[sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 11 00:01:32.057: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8550 /apis/apps/v1/namespaces/deployment-8550/deployments/webserver-deployment f395d763-c35c-4adf-93cc-46bf2f483fb0 7634548 3 2020-02-11 00:00:54 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f6b8c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-11 00:01:28 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-11 00:01:30 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Feb 11 00:01:32.791: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-8550 /apis/apps/v1/namespaces/deployment-8550/replicasets/webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 7634546 3 2020-02-11 00:01:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment f395d763-c35c-4adf-93cc-46bf2f483fb0 0xc002ed25d7 0xc002ed25d8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ed2648 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 11 00:01:32.791: INFO: All old ReplicaSets of Deployment "webserver-deployment": Feb 11 00:01:32.791: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-8550 /apis/apps/v1/namespaces/deployment-8550/replicasets/webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 7634524 3 2020-02-11 00:00:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment f395d763-c35c-4adf-93cc-46bf2f483fb0 0xc002ed2517 0xc002ed2518}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ed2578 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Feb 11 00:01:32.895: INFO: Pod "webserver-deployment-595b5b9587-2bj7g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2bj7g webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-2bj7g d9246d3a-1dda-48a6-8feb-c725e49862ce 7634512 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed2af7 0xc002ed2af8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.896: INFO: Pod "webserver-deployment-595b5b9587-4bpjm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4bpjm webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-4bpjm 490d5c1f-bf96-40d2-94c0-6abb7c77ea98 7634520 0 2020-02-11 00:01:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed2d47 0xc002ed2d48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:27 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-11 00:01:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.896: INFO: Pod "webserver-deployment-595b5b9587-4c4n2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4c4n2 webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-4c4n2 edfcc79e-8bb4-4998-85da-041fe8fcbcf2 7634510 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed2f87 0xc002ed2f88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priori
ty:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.897: INFO: Pod "webserver-deployment-595b5b9587-4cxs5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4cxs5 webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-4cxs5 48eeca9c-23e9-4918-b3c6-8b70bc606367 7634556 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed3177 0xc002ed3178}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exi
sts,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-11 00:01:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.898: INFO: Pod "webserver-deployment-595b5b9587-6bzs2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6bzs2 webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-6bzs2 ce95705a-a0e9-42c6-8299-cb3169700769 7634408 0 2020-02-11 00:00:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed33e7 0xc002ed33e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-11 00:00:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 00:01:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://28bafed7c526ed261dc8af5c434f5061ab2c3530bbe808aa30020b25ee65c271,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.899: INFO: Pod "webserver-deployment-595b5b9587-6n7dc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6n7dc webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-6n7dc 44fde7f6-266e-4929-8314-63a18e056e71 7634378 0 2020-02-11 00:00:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed3680 0xc002ed3681}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExec
ute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-02-11 00:00:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 00:01:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://74047b17954d7ead3b73e81771cea24c31a375914e5f1ecc2623344f9f912e77,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.899: INFO: Pod "webserver-deployment-595b5b9587-9m4lp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9m4lp webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-9m4lp 45e8dff4-c1ef-4606-bf7c-45974ba0f56f 7634500 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed38a0 0xc002ed38a1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.900: INFO: Pod "webserver-deployment-595b5b9587-c722b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c722b webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-c722b 7205d082-337a-4de3-9f75-62d5ed801b45 7634507 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed39b7 0xc002ed39b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.900: INFO: Pod "webserver-deployment-595b5b9587-d98bz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d98bz webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-d98bz 0dc0536b-6a3e-425d-bbf8-079be34e50a1 7634383 0 2020-02-11 
00:00:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed3c57 0xc002ed3c58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-02-11 00:00:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 00:01:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://9fa5a42a1de74970e695e92f9bb08574c0d955a3ed2e023d14067b3df2bb2778,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.901: INFO: Pod "webserver-deployment-595b5b9587-dzt9j" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dzt9j webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-dzt9j c83bb070-9e60-48a4-ab2d-695497b24f77 7634550 0 2020-02-11 00:01:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed3dd0 0xc002ed3dd1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Toler
ationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-11 00:01:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.901: INFO: Pod "webserver-deployment-595b5b9587-ghmn5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ghmn5 webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-ghmn5 efe60ec7-c2de-4f7e-9c2e-20f3e6685812 7634552 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc002ed3f27 0xc002ed3f28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-11 
00:01:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.902: INFO: Pod "webserver-deployment-595b5b9587-m824q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m824q webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-m824q 9a5b89bf-9f8a-4417-bc3c-91e39b57fe52 7634513 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc0025d6097 0xc0025d6098}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadines
sGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.902: INFO: Pod "webserver-deployment-595b5b9587-n75k5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n75k5 webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-n75k5 970295ee-a642-47cf-94bf-24f6cfad90f1 7634372 0 2020-02-11 00:00:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc0025d61a7 0xc0025d61a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias
{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-02-11 00:00:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 00:01:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://774c0f91c5f5d92174c51773b1f4c6661332881fc6a1b832d684551206b1b12b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.902: INFO: Pod "webserver-deployment-595b5b9587-n7hjv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n7hjv webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-n7hjv 3563bbaf-3820-4a9b-874a-7c3f08d86f2c 7634369 0 2020-02-11 00:00:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc0025d6310 0xc0025d6311}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-02-11 00:00:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 00:01:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://1ce02c74716a7ddeff7a51f6f68dcb2979b57e90112eda89b65471b7011c8c00,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.903: INFO: Pod "webserver-deployment-595b5b9587-nmsgd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nmsgd webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-nmsgd d4d7e763-8482-424f-bc5c-d322a53804da 7634492 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc0025d6470 0xc0025d6471}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Tolerat
ionSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.903: INFO: Pod "webserver-deployment-595b5b9587-plj7b" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-plj7b webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-plj7b ffc23f05-7ace-4ec7-8cd7-a670c4bc23c1 7634375 0 2020-02-11 00:00:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc0025d6587 0xc0025d6588}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-11 00:00:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 00:01:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c43bf8c64dbf348e100868da2bbe390fe9c03390ec1717c09a66ee47204014e5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.903: INFO: Pod "webserver-deployment-595b5b9587-ps622" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ps622 webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-ps622 ae127880-bf8a-4006-8a79-b8602b2c12f7 7634509 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc0025d66f0 0xc0025d66f1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.903: INFO: Pod "webserver-deployment-595b5b9587-t95hk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t95hk webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-t95hk f248c1ac-72d7-45b6-98fb-f60df008956a 7634402 0 2020-02-11 00:00:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 
ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc0025d6817 0xc0025d6818}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-11 00:00:54 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 00:01:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4029298b8e5367b4d4f7b0dc3c6011f29a71574995989bd0ea3e985fba3da2dd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.904: INFO: Pod "webserver-deployment-595b5b9587-vj6rv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vj6rv webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-vj6rv c4037737-65d3-44a1-be70-f859e1ea2ee5 7634405 0 2020-02-11 00:00:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc0025d6b90 0xc0025d6b91}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:00:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-11 00:00:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 00:01:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://60295e5a2eff7fb37f85f0498a805ac28187649f3cb1c51e679aafa845a11d53,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.904: INFO: Pod "webserver-deployment-595b5b9587-zvr4h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zvr4h webserver-deployment-595b5b9587- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-595b5b9587-zvr4h 1572fed4-5527-4ae8-b70d-64f01d63fd16 7634557 0 2020-02-11 00:01:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 055b0ee5-9131-4520-b85b-a86ce7a41deb 0xc0025d6da0 0xc0025d6da1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-11 00:01:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.905: INFO: Pod "webserver-deployment-c7997dcc8-48z96" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-48z96 webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-48z96 b6cad746-467f-4325-afa0-fd4bbe3c5a82 7634515 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc0025d7087 0xc0025d7088}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Preemp
tionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.905: INFO: Pod "webserver-deployment-c7997dcc8-5jrsh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5jrsh webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-5jrsh d988535b-75ee-4a9b-b7c5-8123028003e8 7634435 0 2020-02-11 00:01:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc0025d7287 0xc0025d7288}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadine
ssGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-11 00:01:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.905: INFO: Pod "webserver-deployment-c7997dcc8-5qbqf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5qbqf webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-5qbqf 7792fc11-5c23-4cd3-aab8-d0f85f648dd5 7634438 0 2020-02-11 00:01:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc0025d7597 0xc0025d7598}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-11 00:01:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.906: INFO: Pod "webserver-deployment-c7997dcc8-7bjlm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7bjlm webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-7bjlm abd3fea3-ba6f-44eb-8c42-9b274b1d258b 7634545 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc0025d77f7 0xc0025d77f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolic
y:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-11 00:01:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.906: INFO: Pod "webserver-deployment-c7997dcc8-8trjc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8trjc webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-8trjc 72d44b54-d789-497d-93c5-17e2ac527172 7634534 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc0025d7a67 0xc0025d7a68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.907: INFO: Pod "webserver-deployment-c7997dcc8-hktxz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hktxz webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-hktxz 6103522b-1f6e-40de-9ab6-8bbec777d599 7634463 0 2020-02-11 00:01:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc0025d7c47 0xc0025d7c48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-11 00:01:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.907: INFO: Pod "webserver-deployment-c7997dcc8-k8mzs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k8mzs webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-k8mzs 118e6559-5e80-4e41-9684-9c3341a67b25 7634521 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc0025d7ef7 0xc0025d7ef8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespa
ce:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.908: INFO: Pod "webserver-deployment-c7997dcc8-kdspz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kdspz webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-kdspz eaf6e2d4-8b6d-4bc2-b971-587fcf9fb560 7634511 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc00218c087 0xc00218c088}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Pri
orityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.908: INFO: Pod "webserver-deployment-c7997dcc8-l8f48" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l8f48 webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-l8f48 72f979c9-57fe-4447-b410-3f9a5253ccd9 7634533 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc00218c317 0xc00218c318}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExe
cute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.908: INFO: Pod "webserver-deployment-c7997dcc8-mhd4c" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mhd4c webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-mhd4c 1cce7d84-7bc5-418b-88b4-0aea3868f663 7634456 0 2020-02-11 00:01:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc00218c447 0xc00218c448}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.
kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-11 00:01:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.909: INFO: Pod "webserver-deployment-c7997dcc8-t4m5r" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-t4m5r webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-t4m5r 1dc8386d-07a9-4272-8804-530e2d7d4957 7634538 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc00218c5c7 0xc00218c5c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.909: INFO: Pod "webserver-deployment-c7997dcc8-tdzwl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tdzwl webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-tdzwl 595e48fc-fa10-4889-8403-94dc49ab2b52 7634529 0 2020-02-11 00:01:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc00218c6f7 0xc00218c6f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 11 00:01:32.909: INFO: Pod "webserver-deployment-c7997dcc8-xc79r" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xc79r webserver-deployment-c7997dcc8- deployment-8550 /api/v1/namespaces/deployment-8550/pods/webserver-deployment-c7997dcc8-xc79r 1ddc9fca-5300-4478-af5e-6c78e4c3522e 7634461 0 2020-02-11 00:01:25 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 c72caee1-7f81-4d74-a8d7-f7dea7916ff3 0xc00218c817 0xc00218c818}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nlfvt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nlfvt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nlfvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-02-11 00:01:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-11 00:01:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:01:32.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8550" for this suite. • [SLOW TEST:42.110 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":66,"skipped":861,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:01:35.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:01:38.949: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 11 00:01:44.220: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 11 00:03:10.278: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 11 00:03:12.284: INFO: Creating deployment "test-rollover-deployment" Feb 11 00:03:12.306: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 11 00:03:14.326: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 11 00:03:14.333: INFO: Ensure that both replica sets have 1 created replica Feb 11 00:03:14.336: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 11 00:03:14.342: INFO: Updating deployment test-rollover-deployment Feb 11 00:03:14.342: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 11 00:03:16.370: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 11 00:03:16.384: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 11 00:03:16.394: INFO: all replica sets need to contain the pod-template-hash label Feb 11 00:03:16.394: INFO: deployment status: 
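------------------------------
The v1.DeploymentStatus dumps that follow are the rollover gate in action: the test keeps polling until every ReplicaSet's pods carry the pod-template-hash label and the Deployment reports Replicas, UpdatedReplicas and AvailableReplicas all equal with UnavailableReplicas at 0. Because the Deployment under test (spec dump further below) uses maxUnavailable=0, maxSurge=1 and minReadySeconds=10, ReadyReplicas flips to 2 at 00:03:22 while AvailableReplicas stays at 1: the new agnhost pod must be Ready for a full 10 seconds before it counts as available and the old ReplicaSet is scaled to zero. (In the proportional-scaling test above, webserver:404 is a deliberately nonexistent image tag, which is what held those pods in Pending/ContainerCreating while scaling proportions were checked.) Below is a minimal Go sketch of an equivalent Deployment object, assuming the usual k8s.io/api packages; the package and function names are illustrative, not the e2e framework's own.

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rolloverDeployment rebuilds the shape of the Deployment in the spec
// dump below: one replica, surge-only rolling updates (maxUnavailable=0,
// maxSurge=1), and MinReadySeconds=10, so the old ReplicaSet is kept
// until the new pod has been Ready for a full 10 seconds.
func rolloverDeployment() *appsv1.Deployment {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	labels := map[string]string{"name": "rollover-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10,
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
					}},
				},
			},
		},
	}
}
------------------------------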
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976194, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:03:18.405: INFO: all replica sets need to contain the pod-template-hash label Feb 11 00:03:18.406: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976194, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:03:20.413: INFO: all replica sets need to contain the pod-template-hash label Feb 11 00:03:20.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976194, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:03:22.417: INFO: all replica sets need to contain the pod-template-hash label Feb 11 00:03:22.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, 
loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976202, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:03:24.404: INFO: all replica sets need to contain the pod-template-hash label Feb 11 00:03:24.405: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976202, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:03:26.408: INFO: all replica sets need to contain the pod-template-hash label Feb 11 00:03:26.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976202, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:03:29.043: INFO: all replica sets need to contain the pod-template-hash label Feb 11 00:03:29.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976202, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:03:30.407: INFO: all replica sets need to contain the pod-template-hash label Feb 11 00:03:30.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976202, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976192, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:03:32.402: INFO: Feb 11 00:03:32.402: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 11 00:03:32.408: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5553 /apis/apps/v1/namespaces/deployment-5553/deployments/test-rollover-deployment 4ff3cd8c-69bb-4d4f-901c-b61187ff317e 7635031 2 2020-02-11 00:03:12 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003013ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-11 00:03:12 +0000 UTC,LastTransitionTime:2020-02-11 00:03:12 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-11 00:03:32 +0000 UTC,LastTransitionTime:2020-02-11 00:03:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 11 00:03:32.411: INFO: New ReplicaSet 
"test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-5553 /apis/apps/v1/namespaces/deployment-5553/replicasets/test-rollover-deployment-574d6dfbff 7f2009ca-8e83-4eb7-8d69-86bf2712944d 7635020 2 2020-02-11 00:03:14 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 4ff3cd8c-69bb-4d4f-901c-b61187ff317e 0xc002e49f47 0xc002e49f48}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e49fb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 11 00:03:32.411: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 11 00:03:32.411: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5553 /apis/apps/v1/namespaces/deployment-5553/replicasets/test-rollover-controller a93eeab0-bebb-492f-8119-b127662727c9 7635029 2 2020-02-11 00:01:38 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 4ff3cd8c-69bb-4d4f-901c-b61187ff317e 0xc002e49e5f 0xc002e49e70}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e49ed8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 11 00:03:32.411: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5553 /apis/apps/v1/namespaces/deployment-5553/replicasets/test-rollover-deployment-f6c94f66c 23419350-f8ec-48d7-8ea5-9823eec0aa51 7634978 2 2020-02-11 00:03:12 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 4ff3cd8c-69bb-4d4f-901c-b61187ff317e 0xc002e1c020 0xc002e1c021}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e1c098 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 11 00:03:32.414: INFO: Pod "test-rollover-deployment-574d6dfbff-27xc7" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-27xc7 test-rollover-deployment-574d6dfbff- deployment-5553 /api/v1/namespaces/deployment-5553/pods/test-rollover-deployment-574d6dfbff-27xc7 329b1290-2130-452b-905c-e73e1582a236 7634998 0 2020-02-11 00:03:14 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 7f2009ca-8e83-4eb7-8d69-86bf2712944d 0xc003013e77 0xc003013e78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zqggq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zqggq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zqggq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:03:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:03:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 00:03:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-11 00:03:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 00:03:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://d9490385991be5d33b3b97b6c91055c8035af85ae082e37aa9be7e4b96e105e0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:03:32.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5553" for this suite. • [SLOW TEST:116.443 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":67,"skipped":889,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:03:32.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 11 00:03:32.627: INFO: Waiting up to 5m0s for pod "pod-be7631db-d724-48cc-bfc0-a4d7d02271ed" in namespace "emptydir-4444" to be "success or failure" Feb 11 00:03:32.651: INFO: Pod "pod-be7631db-d724-48cc-bfc0-a4d7d02271ed": Phase="Pending", Reason="", readiness=false. Elapsed: 24.394739ms Feb 11 00:03:34.672: INFO: Pod "pod-be7631db-d724-48cc-bfc0-a4d7d02271ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04506717s Feb 11 00:03:36.685: INFO: Pod "pod-be7631db-d724-48cc-bfc0-a4d7d02271ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058387956s Feb 11 00:03:38.722: INFO: Pod "pod-be7631db-d724-48cc-bfc0-a4d7d02271ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095445057s Feb 11 00:03:40.732: INFO: Pod "pod-be7631db-d724-48cc-bfc0-a4d7d02271ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104986442s Feb 11 00:03:42.738: INFO: Pod "pod-be7631db-d724-48cc-bfc0-a4d7d02271ed": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11099556s Feb 11 00:03:44.747: INFO: Pod "pod-be7631db-d724-48cc-bfc0-a4d7d02271ed": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.119657493s STEP: Saw pod success Feb 11 00:03:44.747: INFO: Pod "pod-be7631db-d724-48cc-bfc0-a4d7d02271ed" satisfied condition "success or failure" Feb 11 00:03:44.752: INFO: Trying to get logs from node jerma-node pod pod-be7631db-d724-48cc-bfc0-a4d7d02271ed container test-container: STEP: delete the pod Feb 11 00:03:44.835: INFO: Waiting for pod pod-be7631db-d724-48cc-bfc0-a4d7d02271ed to disappear Feb 11 00:03:44.844: INFO: Pod pod-be7631db-d724-48cc-bfc0-a4d7d02271ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:03:44.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4444" for this suite. • [SLOW TEST:12.497 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":68,"skipped":900,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:03:44.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test env composition Feb 11 00:03:45.071: INFO: Waiting up to 5m0s for pod "var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2" in namespace "var-expansion-3892" to be "success or failure" Feb 11 00:03:45.095: INFO: Pod "var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.647985ms Feb 11 00:03:47.100: INFO: Pod "var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028839286s Feb 11 00:03:49.106: INFO: Pod "var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034585728s Feb 11 00:03:51.651: INFO: Pod "var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579796491s Feb 11 00:03:53.867: INFO: Pod "var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796032323s Feb 11 00:03:55.886: INFO: Pod "var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.814932864s Feb 11 00:03:57.900: INFO: Pod "var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.828756623s STEP: Saw pod success Feb 11 00:03:57.900: INFO: Pod "var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2" satisfied condition "success or failure" Feb 11 00:03:57.906: INFO: Trying to get logs from node jerma-node pod var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2 container dapi-container: STEP: delete the pod Feb 11 00:03:57.953: INFO: Waiting for pod var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2 to disappear Feb 11 00:03:58.026: INFO: Pod var-expansion-371d91c0-e0d0-45bf-8332-fbbf442b18a2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:03:58.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3892" for this suite. • [SLOW TEST:13.112 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":69,"skipped":934,"failed":0} SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:03:58.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 11 00:04:06.780: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9958291a-293a-4582-9179-9e712e579e94" Feb 11 00:04:06.780: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-9958291a-293a-4582-9179-9e712e579e94" in namespace "pods-3906" to be "terminated due to deadline exceeded" Feb 11 00:04:06.790: INFO: Pod "pod-update-activedeadlineseconds-9958291a-293a-4582-9179-9e712e579e94": Phase="Running", Reason="", readiness=true. Elapsed: 9.862867ms Feb 11 00:04:08.800: INFO: Pod "pod-update-activedeadlineseconds-9958291a-293a-4582-9179-9e712e579e94": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020129342s Feb 11 00:04:08.801: INFO: Pod "pod-update-activedeadlineseconds-9958291a-293a-4582-9179-9e712e579e94" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:04:08.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3906" for this suite. 
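The deadline transition recorded above (Running, then Failed with reason DeadlineExceeded about two seconds after the update) can be reproduced directly against the API. A minimal client-go sketch, assuming a recent client-go; the pod name, namespace, and image are illustrative, not the framework's:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	pods := client.CoreV1().Pods("default") // illustrative namespace

	// Create a long-running pod with no deadline set.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "deadline-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
			}},
		},
	}
	if _, err := pods.Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Shrink activeDeadlineSeconds on the live pod. Once the deadline
	// passes, the kubelet kills the pod and its phase becomes Failed with
	// reason DeadlineExceeded, the condition the test above waits for.
	patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
	if _, err := pods.Patch(context.TODO(), "deadline-demo",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("deadline set; expect Failed/DeadlineExceeded shortly")
}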
• [SLOW TEST:10.788 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":70,"skipped":936,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:04:08.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 11 00:04:09.722: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 11 00:04:11.742: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:04:13.752: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:04:15.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:04:17.752: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976249, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 11 00:04:20.839: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:04:20.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7459-crds.webhook.example.com via the AdmissionRegistration API Feb 11 00:04:21.483: INFO: Waiting for webhook configuration to be ready... STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:04:22.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8893" for this suite. STEP: Destroying namespace "webhook-8893-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:13.948 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":71,"skipped":953,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:04:22.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-0f9e2bac-e604-450f-9c08-f645c4811363 STEP: Creating a pod to test consume configMaps Feb 11 00:04:22.966: INFO: Waiting up to 5m0s for pod "pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42" in namespace "configmap-7604" to be "success or failure" Feb 11 00:04:22.980: INFO: Pod "pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42": Phase="Pending", Reason="", readiness=false. Elapsed: 13.51406ms Feb 11 00:04:24.987: INFO: Pod "pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020992149s Feb 11 00:04:26.996: INFO: Pod "pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029226709s Feb 11 00:04:29.003: INFO: Pod "pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037127446s Feb 11 00:04:31.023: INFO: Pod "pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056961071s Feb 11 00:04:33.031: INFO: Pod "pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.06501659s STEP: Saw pod success Feb 11 00:04:33.032: INFO: Pod "pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42" satisfied condition "success or failure" Feb 11 00:04:33.035: INFO: Trying to get logs from node jerma-node pod pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42 container configmap-volume-test: STEP: delete the pod Feb 11 00:04:33.195: INFO: Waiting for pod pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42 to disappear Feb 11 00:04:33.199: INFO: Pod pod-configmaps-efa1a869-b68f-4841-b145-8feacca34b42 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:04:33.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7604" for this suite. • [SLOW TEST:10.438 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":72,"skipped":965,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:04:33.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 11 00:04:33.371: INFO: Waiting up to 5m0s for pod "pod-1b5ab0ba-2788-4d1a-b288-ad7f81d2092b" in namespace "emptydir-880" to be "success or failure" Feb 11 00:04:33.388: INFO: Pod "pod-1b5ab0ba-2788-4d1a-b288-ad7f81d2092b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.362176ms Feb 11 00:04:35.396: INFO: Pod "pod-1b5ab0ba-2788-4d1a-b288-ad7f81d2092b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024821084s Feb 11 00:04:37.404: INFO: Pod "pod-1b5ab0ba-2788-4d1a-b288-ad7f81d2092b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032236416s Feb 11 00:04:39.412: INFO: Pod "pod-1b5ab0ba-2788-4d1a-b288-ad7f81d2092b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041062966s Feb 11 00:04:41.431: INFO: Pod "pod-1b5ab0ba-2788-4d1a-b288-ad7f81d2092b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.059812347s STEP: Saw pod success Feb 11 00:04:41.431: INFO: Pod "pod-1b5ab0ba-2788-4d1a-b288-ad7f81d2092b" satisfied condition "success or failure" Feb 11 00:04:41.435: INFO: Trying to get logs from node jerma-node pod pod-1b5ab0ba-2788-4d1a-b288-ad7f81d2092b container test-container: STEP: delete the pod Feb 11 00:04:41.491: INFO: Waiting for pod pod-1b5ab0ba-2788-4d1a-b288-ad7f81d2092b to disappear Feb 11 00:04:41.496: INFO: Pod pod-1b5ab0ba-2788-4d1a-b288-ad7f81d2092b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:04:41.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-880" for this suite. • [SLOW TEST:8.299 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":73,"skipped":984,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:04:41.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-2f5b9021-f8c9-4a39-a6e1-1ca3a37564da STEP: Creating a pod to test consume secrets Feb 11 00:04:41.683: INFO: Waiting up to 5m0s for pod "pod-secrets-efe27cc2-fc89-4ca6-befb-cff3a5bd0025" in namespace "secrets-9835" to be "success or failure" Feb 11 00:04:41.724: INFO: Pod "pod-secrets-efe27cc2-fc89-4ca6-befb-cff3a5bd0025": Phase="Pending", Reason="", readiness=false. Elapsed: 41.05637ms Feb 11 00:04:43.733: INFO: Pod "pod-secrets-efe27cc2-fc89-4ca6-befb-cff3a5bd0025": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050558221s Feb 11 00:04:45.744: INFO: Pod "pod-secrets-efe27cc2-fc89-4ca6-befb-cff3a5bd0025": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061258717s Feb 11 00:04:47.753: INFO: Pod "pod-secrets-efe27cc2-fc89-4ca6-befb-cff3a5bd0025": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069980799s Feb 11 00:04:49.761: INFO: Pod "pod-secrets-efe27cc2-fc89-4ca6-befb-cff3a5bd0025": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.078113621s STEP: Saw pod success Feb 11 00:04:49.761: INFO: Pod "pod-secrets-efe27cc2-fc89-4ca6-befb-cff3a5bd0025" satisfied condition "success or failure" Feb 11 00:04:49.766: INFO: Trying to get logs from node jerma-node pod pod-secrets-efe27cc2-fc89-4ca6-befb-cff3a5bd0025 container secret-volume-test: STEP: delete the pod Feb 11 00:04:49.832: INFO: Waiting for pod pod-secrets-efe27cc2-fc89-4ca6-befb-cff3a5bd0025 to disappear Feb 11 00:04:49.844: INFO: Pod pod-secrets-efe27cc2-fc89-4ca6-befb-cff3a5bd0025 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:04:49.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9835" for this suite. • [SLOW TEST:8.362 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":74,"skipped":1032,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:04:49.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 11 00:04:49.945: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 11 00:04:49.959: INFO: Waiting for terminating namespaces to be deleted... 
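The Secrets test just completed mounts its volume with an explicit defaultMode. A sketch of the pod-spec shape involved; the 0400 mode, names, image, and command are illustrative assumptions, since the log does not show the actual values:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretPod mirrors the shape of the pod such a test creates: a secret
// volume mounted with an explicit per-file mode.
func secretPod(secretName string) *corev1.Pod {
	mode := int32(0400) // illustrative; octal file mode applied to every key
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  secretName,
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}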
Feb 11 00:04:49.963: INFO: Logging pods the kubelet thinks are on node jerma-node before test Feb 11 00:04:49.969: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Feb 11 00:04:49.969: INFO: Container kube-proxy ready: true, restart count 0 Feb 11 00:04:49.969: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 11 00:04:49.969: INFO: Container weave ready: true, restart count 1 Feb 11 00:04:49.969: INFO: Container weave-npc ready: true, restart count 0 Feb 11 00:04:49.969: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Feb 11 00:04:49.991: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Feb 11 00:04:49.991: INFO: Container kube-apiserver ready: true, restart count 1 Feb 11 00:04:49.991: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Feb 11 00:04:49.991: INFO: Container etcd ready: true, restart count 1 Feb 11 00:04:49.991: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Feb 11 00:04:49.991: INFO: Container coredns ready: true, restart count 0 Feb 11 00:04:49.991: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Feb 11 00:04:49.991: INFO: Container coredns ready: true, restart count 0 Feb 11 00:04:49.991: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Feb 11 00:04:49.991: INFO: Container kube-controller-manager ready: true, restart count 5 Feb 11 00:04:49.991: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Feb 11 00:04:49.991: INFO: Container kube-proxy ready: true, restart count 0 Feb 11 00:04:49.991: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 11 00:04:49.991: INFO: Container weave ready: true, restart count 0 Feb 11 00:04:49.991: INFO: Container weave-npc ready: true, restart count 0 Feb 11 00:04:49.991: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Feb 11 00:04:49.991: INFO: Container kube-scheduler ready: true, restart count 7 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f2304485bd723f], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f2304487d98134], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:04:51.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-217" for this suite.
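The FailedScheduling events above are the expected outcome whenever spec.nodeSelector names a label no node carries. A minimal sketch of such a pod; the selector key/value and image are illustrative:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedPod carries a nodeSelector that no node satisfies, so the
// scheduler emits "0/2 nodes are available: 2 node(s) didn't match node
// selector." and the pod stays Pending indefinitely.
func restrictedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"label": "nonempty", // illustrative; assumed absent from every node
			},
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
			}},
		},
	}
}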
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":280,"completed":75,"skipped":1032,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:04:51.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:04:51.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1586" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":76,"skipped":1058,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:04:51.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod test-webserver-c2421a2d-3117-4878-9eaa-e1ed8eac98f9 in namespace container-probe-4381 Feb 11 00:04:59.356: INFO: Started pod test-webserver-c2421a2d-3117-4878-9eaa-e1ed8eac98f9 in namespace container-probe-4381 STEP: checking the pod's current state and verifying that restartCount is present Feb 11 00:04:59.361: INFO: Initial restart count of pod test-webserver-c2421a2d-3117-4878-9eaa-e1ed8eac98f9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 
00:08:59.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4381" for this suite. • [SLOW TEST:248.782 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":77,"skipped":1094,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:08:59.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 11 00:09:00.698: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 11 00:09:02.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:09:04.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, 
loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:09:06.726: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:09:08.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716976540, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 11 00:09:11.744: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:09:11.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:09:13.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3946" for this suite. STEP: Destroying namespace "webhook-3946-markers" for this suite. 
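The denials above are produced by the webhook backend itself: the API server sends an AdmissionReview to the registered endpoint and relays whatever the handler returns. A sketch of a deny-everything handler, assuming the standard admission/v1 types; the route, message, and TLS paths are illustrative:

package main

import (
	"encoding/json"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// deny rejects every admission request with a human-readable message,
// which the API server surfaces to the client that issued the operation.
func deny(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID, // must echo the request UID
		Allowed: false,
		Result: &metav1.Status{
			Message: "the custom resource contains unwanted data", // illustrative
		},
	}
	json.NewEncoder(w).Encode(&review)
}

func main() {
	http.HandleFunc("/always-deny", deny) // illustrative route
	// The API server only talks to webhooks over TLS; paths are illustrative.
	panic(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}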
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:13.191 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":78,"skipped":1118,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:09:13.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0211 00:10:00.096028 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 11 00:10:00.096: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:10:00.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2224" for this suite. 
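The orphaning above hinges on the delete options sent with the RC deletion: a propagation policy of Orphan makes the garbage collector strip ownerReferences from the pods rather than delete them, which is why the pods survive the 30-second watch window. A sketch of an equivalent delete via client-go, with illustrative RC and namespace names:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Orphan propagation: the RC goes away, its pods keep running.
	orphan := metav1.DeletePropagationOrphan
	err = client.CoreV1().ReplicationControllers("gc-demo").Delete( // illustrative namespace
		context.TODO(), "demo-rc", // illustrative RC name
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
}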
• [SLOW TEST:46.923 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":79,"skipped":1127,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:10:00.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 11 00:10:00.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6" in namespace "downward-api-13" to be "success or failure" Feb 11 00:10:00.233: INFO: Pod "downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.379185ms Feb 11 00:10:02.239: INFO: Pod "downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018860528s Feb 11 00:10:04.250: INFO: Pod "downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029156308s Feb 11 00:10:06.292: INFO: Pod "downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071488257s Feb 11 00:10:08.490: INFO: Pod "downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269311714s Feb 11 00:10:10.667: INFO: Pod "downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.446903354s Feb 11 00:10:12.693: INFO: Pod "downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.472335235s Feb 11 00:10:15.251: INFO: Pod "downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.030533534s STEP: Saw pod success Feb 11 00:10:15.251: INFO: Pod "downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6" satisfied condition "success or failure" Feb 11 00:10:15.255: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6 container client-container: STEP: delete the pod Feb 11 00:10:15.816: INFO: Waiting for pod downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6 to disappear Feb 11 00:10:15.879: INFO: Pod downwardapi-volume-d1eb894d-377f-4688-9c32-a53bacef0db6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:10:15.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-13" for this suite. • [SLOW TEST:16.164 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":80,"skipped":1146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:10:16.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:10:30.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8871" for this suite. 
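The cpu limit above reaches the container through a downwardAPI volume whose file is backed by a resourceFieldRef. A sketch of the pod-spec shape involved; the names, image, and the 500m limit are illustrative:

package example

import (
	"k8s.io/apimachinery/pkg/api/resource"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod mounts a downwardAPI volume whose single file exposes the
// container's CPU limit; reading /etc/podinfo/cpu_limit inside the
// container yields the value.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"), // illustrative limit
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}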
• [SLOW TEST:13.771 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":81,"skipped":1187,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:10:30.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 11 00:10:30.210: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22" in namespace "projected-8839" to be "success or failure" Feb 11 00:10:30.325: INFO: Pod "downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22": Phase="Pending", Reason="", readiness=false. Elapsed: 115.060704ms Feb 11 00:10:32.336: INFO: Pod "downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125485093s Feb 11 00:10:34.343: INFO: Pod "downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132368072s Feb 11 00:10:36.533: INFO: Pod "downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.32259658s Feb 11 00:10:38.539: INFO: Pod "downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22": Phase="Pending", Reason="", readiness=false. Elapsed: 8.329285471s Feb 11 00:10:40.552: INFO: Pod "downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.341777355s STEP: Saw pod success Feb 11 00:10:40.552: INFO: Pod "downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22" satisfied condition "success or failure" Feb 11 00:10:40.558: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22 container client-container: STEP: delete the pod Feb 11 00:10:40.667: INFO: Waiting for pod downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22 to disappear Feb 11 00:10:40.672: INFO: Pod downwardapi-volume-2e45157c-cc7e-42ab-b02e-7bc9bd284a22 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:10:40.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8839" for this suite. • [SLOW TEST:10.643 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":82,"skipped":1192,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:10:40.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384 STEP: creating the pod Feb 11 00:10:40.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5849' Feb 11 00:10:43.496: INFO: stderr: "" Feb 11 00:10:43.496: INFO: stdout: "pod/pause created\n" Feb 11 00:10:43.497: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 11 00:10:43.497: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5849" to be "running and ready" Feb 11 00:10:43.516: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 19.018706ms Feb 11 00:10:45.523: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025920853s Feb 11 00:10:47.530: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033159408s Feb 11 00:10:49.539: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042521427s Feb 11 00:10:51.547: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.050693943s Feb 11 00:10:51.548: INFO: Pod "pause" satisfied condition "running and ready" Feb 11 00:10:51.548: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: adding the label testing-label with value testing-label-value to a pod Feb 11 00:10:51.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5849' Feb 11 00:10:51.718: INFO: stderr: "" Feb 11 00:10:51.719: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 11 00:10:51.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5849' Feb 11 00:10:51.950: INFO: stderr: "" Feb 11 00:10:51.950: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 8s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 11 00:10:51.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5849' Feb 11 00:10:52.079: INFO: stderr: "" Feb 11 00:10:52.079: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 11 00:10:52.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5849' Feb 11 00:10:52.272: INFO: stderr: "" Feb 11 00:10:52.272: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 STEP: using delete to clean up resources Feb 11 00:10:52.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5849' Feb 11 00:10:52.545: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 11 00:10:52.546: INFO: stdout: "pod \"pause\" force deleted\n" Feb 11 00:10:52.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5849' Feb 11 00:10:52.732: INFO: stderr: "No resources found in kubectl-5849 namespace.\n" Feb 11 00:10:52.733: INFO: stdout: "" Feb 11 00:10:52.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5849 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 11 00:10:52.839: INFO: stderr: "" Feb 11 00:10:52.839: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:10:52.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5849" for this suite. 
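The label test above is pure kubectl round-tripping: add a label, read it back through the -L output column, then delete it with the trailing-dash syntax. A hand-run equivalent of the same three steps (the pod name pause and the label key/value mirror this run; the image is an illustrative stand-in for the manifest the test feeds to create -f):

kubectl run pause --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl label pod pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                      # value appears in a TESTING-LABEL column
kubectl label pod pause testing-label-                      # trailing '-' removes the key
kubectl get pod pause -L testing-label                      # column is now empty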
• [SLOW TEST:12.196 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":280,"completed":83,"skipped":1196,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:10:52.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:10:53.091: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 11 00:10:56.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7997 create -f -' Feb 11 00:10:59.397: INFO: stderr: "" Feb 11 00:10:59.398: INFO: stdout: "e2e-test-crd-publish-openapi-9621-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 11 00:10:59.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7997 delete e2e-test-crd-publish-openapi-9621-crds test-cr' Feb 11 00:10:59.586: INFO: stderr: "" Feb 11 00:10:59.586: INFO: stdout: "e2e-test-crd-publish-openapi-9621-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Feb 11 00:10:59.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7997 apply -f -' Feb 11 00:10:59.962: INFO: stderr: "" Feb 11 00:10:59.962: INFO: stdout: "e2e-test-crd-publish-openapi-9621-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 11 00:10:59.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7997 delete e2e-test-crd-publish-openapi-9621-crds test-cr' Feb 11 00:11:00.107: INFO: stderr: "" Feb 11 00:11:00.108: INFO: stdout: "e2e-test-crd-publish-openapi-9621-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 11 00:11:00.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9621-crds' Feb 11 00:11:00.483: INFO: stderr: "" Feb 11 00:11:00.484: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9621-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:11:02.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7997" for this suite. • [SLOW TEST:9.836 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":84,"skipped":1197,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:11:02.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 11 00:11:02.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4149' Feb 11 00:11:02.942: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 11 00:11:02.942: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740 Feb 11 00:11:05.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4149' Feb 11 00:11:05.120: INFO: stderr: "" Feb 11 00:11:05.120: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:11:05.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4149" for this suite. 
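Note the stderr captured above: --generator=deployment/apps.v1 was already deprecated in this v1.17-era kubectl. A sketch of the non-deprecated way to get the same Deployment-from-image behavior (the deployment name is reused from the test purely for illustration):

kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
kubectl get pods -l app=e2e-test-httpd-deployment    # create deployment labels its pods app=<name>
kubectl delete deployment e2e-test-httpd-deployment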
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":280,"completed":85,"skipped":1207,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:11:05.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-2ef48ac5-19f5-460e-9e5f-ea1579acb165 in namespace container-probe-3135 Feb 11 00:11:15.313: INFO: Started pod liveness-2ef48ac5-19f5-460e-9e5f-ea1579acb165 in namespace container-probe-3135 STEP: checking the pod's current state and verifying that restartCount is present Feb 11 00:11:15.319: INFO: Initial restart count of pod liveness-2ef48ac5-19f5-460e-9e5f-ea1579acb165 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:15:18.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3135" for this suite. 
• [SLOW TEST:253.495 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":86,"skipped":1228,"failed":0} [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:15:18.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's command Feb 11 00:15:18.943: INFO: Waiting up to 5m0s for pod "var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42" in namespace "var-expansion-6706" to be "success or failure" Feb 11 00:15:19.020: INFO: Pod "var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42": Phase="Pending", Reason="", readiness=false. Elapsed: 76.26619ms Feb 11 00:15:21.027: INFO: Pod "var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083880274s Feb 11 00:15:23.033: INFO: Pod "var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089511699s Feb 11 00:15:25.074: INFO: Pod "var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130248738s Feb 11 00:15:27.083: INFO: Pod "var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139369571s Feb 11 00:15:29.091: INFO: Pod "var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14773284s Feb 11 00:15:31.097: INFO: Pod "var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.153051494s STEP: Saw pod success Feb 11 00:15:31.097: INFO: Pod "var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42" satisfied condition "success or failure" Feb 11 00:15:31.099: INFO: Trying to get logs from node jerma-node pod var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42 container dapi-container: STEP: delete the pod Feb 11 00:15:31.151: INFO: Waiting for pod var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42 to disappear Feb 11 00:15:31.162: INFO: Pod var-expansion-4f2aa209-a0c7-43b1-95ad-edc6a449ee42 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:15:31.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6706" for this suite. 
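Variable expansion here means that $(NAME) references in a container's command and args are substituted from the container's env before the process starts. A sketch of the mechanism this test exercises (all names and values are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31
    env:
    - name: MESSAGE
      value: "hello from the env"
    # $(MESSAGE) is replaced by Kubernetes before the shell ever runs
    command: ["/bin/sh", "-c", "echo $(MESSAGE)"]
EOF
kubectl logs var-expansion-demo    # prints: hello from the env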
• [SLOW TEST:12.596 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":87,"skipped":1228,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:15:31.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 11 00:15:31.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6859' Feb 11 00:15:31.409: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 11 00:15:31.409: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604 Feb 11 00:15:34.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6859' Feb 11 00:15:34.928: INFO: stderr: "" Feb 11 00:15:34.928: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:15:34.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6859" for this suite. 
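This second kubectl run test covers the default-generator path: with no --generator flag, this kubectl still falls back to creating a Deployment and prints the same deprecation warning. Worth flagging for anyone replaying these commands: from kubectl 1.18 onward, a bare kubectl run creates only a Pod. A sketch of the contrast (the pod name is illustrative):

# v1.17-era behavior, as logged above: implicit deployment generator
kubectl run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
# kubectl >= 1.18: the same verb creates a bare pod instead
kubectl run httpd-pod --image=docker.io/library/httpd:2.4.38-alpine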
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":280,"completed":88,"skipped":1263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:15:34.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service nodeport-test with type=NodePort in namespace services-7367 STEP: creating replication controller nodeport-test in namespace services-7367 I0211 00:15:35.469616 9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-7367, replica count: 2 I0211 00:15:38.521288 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0211 00:15:41.522797 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0211 00:15:44.523822 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0211 00:15:47.524531 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0211 00:15:50.525441 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 11 00:15:50.525: INFO: Creating new exec pod Feb 11 00:15:59.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7367 execpodqp924 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Feb 11 00:16:00.114: INFO: stderr: "I0211 00:15:59.846084 1006 log.go:172] (0xc0001042c0) (0xc0007a0960) Create stream\nI0211 00:15:59.846206 1006 log.go:172] (0xc0001042c0) (0xc0007a0960) Stream added, broadcasting: 1\nI0211 00:15:59.853767 1006 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0211 00:15:59.853850 1006 log.go:172] (0xc0001042c0) (0xc0004fb5e0) Create stream\nI0211 00:15:59.853865 1006 log.go:172] (0xc0001042c0) (0xc0004fb5e0) Stream added, broadcasting: 3\nI0211 00:15:59.863592 1006 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0211 00:15:59.863637 1006 log.go:172] (0xc0001042c0) (0xc0009ec000) Create stream\nI0211 00:15:59.863651 1006 log.go:172] (0xc0001042c0) (0xc0009ec000) Stream added, broadcasting: 5\nI0211 00:15:59.875328 1006 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0211 00:15:59.977725 1006 log.go:172] (0xc0001042c0) Data frame received for 5\nI0211 00:15:59.977791 1006 log.go:172] (0xc0009ec000) (5) Data frame handling\nI0211 00:15:59.977823 1006 log.go:172] (0xc0009ec000) (5) Data frame 
sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0211 00:16:00.005068 1006 log.go:172] (0xc0001042c0) Data frame received for 5\nI0211 00:16:00.005097 1006 log.go:172] (0xc0009ec000) (5) Data frame handling\nI0211 00:16:00.005113 1006 log.go:172] (0xc0009ec000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0211 00:16:00.103791 1006 log.go:172] (0xc0001042c0) Data frame received for 1\nI0211 00:16:00.103910 1006 log.go:172] (0xc0001042c0) (0xc0004fb5e0) Stream removed, broadcasting: 3\nI0211 00:16:00.104183 1006 log.go:172] (0xc0007a0960) (1) Data frame handling\nI0211 00:16:00.104290 1006 log.go:172] (0xc0007a0960) (1) Data frame sent\nI0211 00:16:00.104386 1006 log.go:172] (0xc0001042c0) (0xc0009ec000) Stream removed, broadcasting: 5\nI0211 00:16:00.104449 1006 log.go:172] (0xc0001042c0) (0xc0007a0960) Stream removed, broadcasting: 1\nI0211 00:16:00.104467 1006 log.go:172] (0xc0001042c0) Go away received\nI0211 00:16:00.106427 1006 log.go:172] (0xc0001042c0) (0xc0007a0960) Stream removed, broadcasting: 1\nI0211 00:16:00.106446 1006 log.go:172] (0xc0001042c0) (0xc0004fb5e0) Stream removed, broadcasting: 3\nI0211 00:16:00.106470 1006 log.go:172] (0xc0001042c0) (0xc0009ec000) Stream removed, broadcasting: 5\n" Feb 11 00:16:00.114: INFO: stdout: "" Feb 11 00:16:00.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7367 execpodqp924 -- /bin/sh -x -c nc -zv -t -w 2 10.96.233.159 80' Feb 11 00:16:00.434: INFO: stderr: "I0211 00:16:00.310705 1028 log.go:172] (0xc0000f4b00) (0xc000966000) Create stream\nI0211 00:16:00.310827 1028 log.go:172] (0xc0000f4b00) (0xc000966000) Stream added, broadcasting: 1\nI0211 00:16:00.315047 1028 log.go:172] (0xc0000f4b00) Reply frame received for 1\nI0211 00:16:00.315093 1028 log.go:172] (0xc0000f4b00) (0xc0006d3a40) Create stream\nI0211 00:16:00.315105 1028 log.go:172] (0xc0000f4b00) (0xc0006d3a40) Stream added, broadcasting: 3\nI0211 00:16:00.316078 1028 log.go:172] (0xc0000f4b00) Reply frame received for 3\nI0211 00:16:00.316118 1028 log.go:172] (0xc0000f4b00) (0xc00021c000) Create stream\nI0211 00:16:00.316130 1028 log.go:172] (0xc0000f4b00) (0xc00021c000) Stream added, broadcasting: 5\nI0211 00:16:00.317683 1028 log.go:172] (0xc0000f4b00) Reply frame received for 5\nI0211 00:16:00.367701 1028 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0211 00:16:00.367765 1028 log.go:172] (0xc00021c000) (5) Data frame handling\nI0211 00:16:00.367797 1028 log.go:172] (0xc00021c000) (5) Data frame sent\nI0211 00:16:00.367806 1028 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0211 00:16:00.367825 1028 log.go:172] (0xc00021c000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.233.159 80\nConnection to 10.96.233.159 80 port [tcp/http] succeeded!\nI0211 00:16:00.367862 1028 log.go:172] (0xc00021c000) (5) Data frame sent\nI0211 00:16:00.427057 1028 log.go:172] (0xc0000f4b00) (0xc0006d3a40) Stream removed, broadcasting: 3\nI0211 00:16:00.427250 1028 log.go:172] (0xc0000f4b00) Data frame received for 1\nI0211 00:16:00.427285 1028 log.go:172] (0xc000966000) (1) Data frame handling\nI0211 00:16:00.427339 1028 log.go:172] (0xc000966000) (1) Data frame sent\nI0211 00:16:00.427421 1028 log.go:172] (0xc0000f4b00) (0xc00021c000) Stream removed, broadcasting: 5\nI0211 00:16:00.427505 1028 log.go:172] (0xc0000f4b00) (0xc000966000) Stream removed, broadcasting: 1\nI0211 00:16:00.427517 1028 log.go:172] (0xc0000f4b00) Go away received\nI0211 00:16:00.428012 1028 log.go:172] (0xc0000f4b00) 
(0xc000966000) Stream removed, broadcasting: 1\nI0211 00:16:00.428025 1028 log.go:172] (0xc0000f4b00) (0xc0006d3a40) Stream removed, broadcasting: 3\nI0211 00:16:00.428034 1028 log.go:172] (0xc0000f4b00) (0xc00021c000) Stream removed, broadcasting: 5\n" Feb 11 00:16:00.434: INFO: stdout: "" Feb 11 00:16:00.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7367 execpodqp924 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32435' Feb 11 00:16:00.898: INFO: stderr: "I0211 00:16:00.694281 1051 log.go:172] (0xc00002c0b0) (0xc0005c2780) Create stream\nI0211 00:16:00.695292 1051 log.go:172] (0xc00002c0b0) (0xc0005c2780) Stream added, broadcasting: 1\nI0211 00:16:00.707881 1051 log.go:172] (0xc00002c0b0) Reply frame received for 1\nI0211 00:16:00.708136 1051 log.go:172] (0xc00002c0b0) (0xc000229400) Create stream\nI0211 00:16:00.708155 1051 log.go:172] (0xc00002c0b0) (0xc000229400) Stream added, broadcasting: 3\nI0211 00:16:00.711176 1051 log.go:172] (0xc00002c0b0) Reply frame received for 3\nI0211 00:16:00.711390 1051 log.go:172] (0xc00002c0b0) (0xc0008b4000) Create stream\nI0211 00:16:00.711421 1051 log.go:172] (0xc00002c0b0) (0xc0008b4000) Stream added, broadcasting: 5\nI0211 00:16:00.713030 1051 log.go:172] (0xc00002c0b0) Reply frame received for 5\nI0211 00:16:00.819968 1051 log.go:172] (0xc00002c0b0) Data frame received for 5\nI0211 00:16:00.820105 1051 log.go:172] (0xc0008b4000) (5) Data frame handling\nI0211 00:16:00.820148 1051 log.go:172] (0xc0008b4000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32435\nI0211 00:16:00.823764 1051 log.go:172] (0xc00002c0b0) Data frame received for 5\nI0211 00:16:00.823781 1051 log.go:172] (0xc0008b4000) (5) Data frame handling\nI0211 00:16:00.823799 1051 log.go:172] (0xc0008b4000) (5) Data frame sent\nConnection to 10.96.2.250 32435 port [tcp/32435] succeeded!\nI0211 00:16:00.889485 1051 log.go:172] (0xc00002c0b0) Data frame received for 1\nI0211 00:16:00.889826 1051 log.go:172] (0xc00002c0b0) (0xc000229400) Stream removed, broadcasting: 3\nI0211 00:16:00.889939 1051 log.go:172] (0xc0005c2780) (1) Data frame handling\nI0211 00:16:00.890035 1051 log.go:172] (0xc0005c2780) (1) Data frame sent\nI0211 00:16:00.890166 1051 log.go:172] (0xc00002c0b0) (0xc0008b4000) Stream removed, broadcasting: 5\nI0211 00:16:00.890252 1051 log.go:172] (0xc00002c0b0) (0xc0005c2780) Stream removed, broadcasting: 1\nI0211 00:16:00.890317 1051 log.go:172] (0xc00002c0b0) Go away received\nI0211 00:16:00.891335 1051 log.go:172] (0xc00002c0b0) (0xc0005c2780) Stream removed, broadcasting: 1\nI0211 00:16:00.891410 1051 log.go:172] (0xc00002c0b0) (0xc000229400) Stream removed, broadcasting: 3\nI0211 00:16:00.891477 1051 log.go:172] (0xc00002c0b0) (0xc0008b4000) Stream removed, broadcasting: 5\n" Feb 11 00:16:00.898: INFO: stdout: "" Feb 11 00:16:00.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7367 execpodqp924 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32435' Feb 11 00:16:01.387: INFO: stderr: "I0211 00:16:01.161009 1071 log.go:172] (0xc00088c9a0) (0xc0008f41e0) Create stream\nI0211 00:16:01.161250 1071 log.go:172] (0xc00088c9a0) (0xc0008f41e0) Stream added, broadcasting: 1\nI0211 00:16:01.170815 1071 log.go:172] (0xc00088c9a0) Reply frame received for 1\nI0211 00:16:01.170894 1071 log.go:172] (0xc00088c9a0) (0xc00041a780) Create stream\nI0211 00:16:01.170908 1071 log.go:172] (0xc00088c9a0) (0xc00041a780) Stream added, broadcasting: 3\nI0211 00:16:01.173255 1071 log.go:172] 
(0xc00088c9a0) Reply frame received for 3\nI0211 00:16:01.173447 1071 log.go:172] (0xc00088c9a0) (0xc0008f4280) Create stream\nI0211 00:16:01.173489 1071 log.go:172] (0xc00088c9a0) (0xc0008f4280) Stream added, broadcasting: 5\nI0211 00:16:01.175019 1071 log.go:172] (0xc00088c9a0) Reply frame received for 5\nI0211 00:16:01.255314 1071 log.go:172] (0xc00088c9a0) Data frame received for 5\nI0211 00:16:01.255469 1071 log.go:172] (0xc0008f4280) (5) Data frame handling\nI0211 00:16:01.255517 1071 log.go:172] (0xc0008f4280) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32435\nI0211 00:16:01.263375 1071 log.go:172] (0xc00088c9a0) Data frame received for 5\nI0211 00:16:01.263413 1071 log.go:172] (0xc0008f4280) (5) Data frame handling\nI0211 00:16:01.263425 1071 log.go:172] (0xc0008f4280) (5) Data frame sent\nConnection to 10.96.1.234 32435 port [tcp/32435] succeeded!\nI0211 00:16:01.371706 1071 log.go:172] (0xc00088c9a0) Data frame received for 1\nI0211 00:16:01.371861 1071 log.go:172] (0xc00088c9a0) (0xc00041a780) Stream removed, broadcasting: 3\nI0211 00:16:01.372125 1071 log.go:172] (0xc0008f41e0) (1) Data frame handling\nI0211 00:16:01.372213 1071 log.go:172] (0xc0008f41e0) (1) Data frame sent\nI0211 00:16:01.372267 1071 log.go:172] (0xc00088c9a0) (0xc0008f4280) Stream removed, broadcasting: 5\nI0211 00:16:01.372328 1071 log.go:172] (0xc00088c9a0) (0xc0008f41e0) Stream removed, broadcasting: 1\nI0211 00:16:01.372378 1071 log.go:172] (0xc00088c9a0) Go away received\nI0211 00:16:01.373750 1071 log.go:172] (0xc00088c9a0) (0xc0008f41e0) Stream removed, broadcasting: 1\nI0211 00:16:01.373769 1071 log.go:172] (0xc00088c9a0) (0xc00041a780) Stream removed, broadcasting: 3\nI0211 00:16:01.373777 1071 log.go:172] (0xc00088c9a0) (0xc0008f4280) Stream removed, broadcasting: 5\n" Feb 11 00:16:01.387: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:16:01.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7367" for this suite. 
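Before the teardown completes below, note what the four nc probes above establish: the NodePort service answers on its DNS name and on its cluster IP on port 80, and on both node IPs on the allocated node port. A condensed replay of the same checks as they would be typed during the run (the exec pod, service name, and addresses are taken from this log):

kubectl get svc nodeport-test -n services-7367 -o jsonpath='{.spec.ports[0].nodePort}'   # 32435 in this run
kubectl exec -n services-7367 execpodqp924 -- /bin/sh -c 'nc -zv -t -w 2 nodeport-test 80'
kubectl exec -n services-7367 execpodqp924 -- /bin/sh -c 'nc -zv -t -w 2 10.96.233.159 80'
kubectl exec -n services-7367 execpodqp924 -- /bin/sh -c 'nc -zv -t -w 2 10.96.2.250 32435'
kubectl exec -n services-7367 execpodqp924 -- /bin/sh -c 'nc -zv -t -w 2 10.96.1.234 32435'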
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:26.464 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":89,"skipped":1290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:16:01.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 11 00:16:01.523: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530" in namespace "projected-5666" to be "success or failure" Feb 11 00:16:01.540: INFO: Pod "downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530": Phase="Pending", Reason="", readiness=false. Elapsed: 16.818009ms Feb 11 00:16:03.710: INFO: Pod "downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18599295s Feb 11 00:16:05.718: INFO: Pod "downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194371191s Feb 11 00:16:07.803: INFO: Pod "downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530": Phase="Pending", Reason="", readiness=false. Elapsed: 6.279098297s Feb 11 00:16:10.944: INFO: Pod "downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530": Phase="Pending", Reason="", readiness=false. Elapsed: 9.420491109s Feb 11 00:16:12.951: INFO: Pod "downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530": Phase="Pending", Reason="", readiness=false. Elapsed: 11.427888226s Feb 11 00:16:14.961: INFO: Pod "downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.43732475s STEP: Saw pod success Feb 11 00:16:14.961: INFO: Pod "downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530" satisfied condition "success or failure" Feb 11 00:16:14.965: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530 container client-container: STEP: delete the pod Feb 11 00:16:15.015: INFO: Waiting for pod downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530 to disappear Feb 11 00:16:15.093: INFO: Pod downwardapi-volume-e53ea78d-e32e-4036-b990-74546ce61530 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:16:15.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5666" for this suite. • [SLOW TEST:13.689 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":90,"skipped":1315,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:16:15.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:16:26.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6876" for this suite. • [SLOW TEST:11.438 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":280,"completed":91,"skipped":1319,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:16:26.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:16:26.622: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:16:35.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2515" for this suite. • [SLOW TEST:8.478 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":92,"skipped":1325,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:16:35.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 11 00:16:42.184: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:16:42.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-288" for this suite. 
• [SLOW TEST:7.199 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":93,"skipped":1327,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:16:42.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Feb 11 00:16:42.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9431' Feb 11 00:16:42.999: INFO: stderr: "" Feb 11 00:16:42.999: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 11 00:16:44.010: INFO: Selector matched 1 pods for map[app:agnhost] Feb 11 00:16:44.010: INFO: Found 0 / 1 Feb 11 00:16:45.005: INFO: Selector matched 1 pods for map[app:agnhost] Feb 11 00:16:45.005: INFO: Found 0 / 1 Feb 11 00:16:46.005: INFO: Selector matched 1 pods for map[app:agnhost] Feb 11 00:16:46.005: INFO: Found 0 / 1 Feb 11 00:16:47.072: INFO: Selector matched 1 pods for map[app:agnhost] Feb 11 00:16:47.072: INFO: Found 0 / 1 Feb 11 00:16:48.007: INFO: Selector matched 1 pods for map[app:agnhost] Feb 11 00:16:48.008: INFO: Found 0 / 1 Feb 11 00:16:49.007: INFO: Selector matched 1 pods for map[app:agnhost] Feb 11 00:16:49.007: INFO: Found 0 / 1 Feb 11 00:16:50.009: INFO: Selector matched 1 pods for map[app:agnhost] Feb 11 00:16:50.009: INFO: Found 1 / 1 Feb 11 00:16:50.009: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 11 00:16:50.015: INFO: Selector matched 1 pods for map[app:agnhost] Feb 11 00:16:50.015: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 11 00:16:50.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-wg58v --namespace=kubectl-9431 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 11 00:16:50.200: INFO: stderr: "" Feb 11 00:16:50.200: INFO: stdout: "pod/agnhost-master-wg58v patched\n" STEP: checking annotations Feb 11 00:16:50.206: INFO: Selector matched 1 pods for map[app:agnhost] Feb 11 00:16:50.206: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:16:50.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9431" for this suite. • [SLOW TEST:7.994 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":280,"completed":94,"skipped":1335,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:16:50.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:16:50.372: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1146bdc5-acec-4f35-ab82-ff4978f480e3" in namespace "security-context-test-2943" to be "success or failure" Feb 11 00:16:50.378: INFO: Pod "busybox-privileged-false-1146bdc5-acec-4f35-ab82-ff4978f480e3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.891189ms Feb 11 00:16:52.383: INFO: Pod "busybox-privileged-false-1146bdc5-acec-4f35-ab82-ff4978f480e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010849557s Feb 11 00:16:54.390: INFO: Pod "busybox-privileged-false-1146bdc5-acec-4f35-ab82-ff4978f480e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017186777s Feb 11 00:16:56.396: INFO: Pod "busybox-privileged-false-1146bdc5-acec-4f35-ab82-ff4978f480e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023315498s Feb 11 00:16:58.404: INFO: Pod "busybox-privileged-false-1146bdc5-acec-4f35-ab82-ff4978f480e3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.031373476s Feb 11 00:16:58.404: INFO: Pod "busybox-privileged-false-1146bdc5-acec-4f35-ab82-ff4978f480e3" satisfied condition "success or failure" Feb 11 00:16:58.422: INFO: Got logs for pod "busybox-privileged-false-1146bdc5-acec-4f35-ab82-ff4978f480e3": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:16:58.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2943" for this suite. • [SLOW TEST:8.219 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":95,"skipped":1374,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:16:58.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:17:58.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5813" for this suite. 
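The readiness-probe test above is the mirror image of the liveness tests: a probe that always fails keeps the pod out of Ready forever but never restarts it, since only liveness failures trigger restarts. A minimal reproduction (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo
spec:
  containers:
  - name: app
    image: busybox:1.31
    command: ["/bin/sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so Ready never becomes true
      periodSeconds: 5
EOF
kubectl get pod never-ready-demo    # READY stays 0/1 while RESTARTS stays 0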
• [SLOW TEST:60.167 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":96,"skipped":1394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:17:58.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap that has name configmap-test-emptyKey-ae93595e-fd39-4dea-8241-259a84ba4139 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:17:58.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4680" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":97,"skipped":1447,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:17:58.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:17:58.868: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 11 00:18:02.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1775 create -f -' Feb 11 00:18:05.450: INFO: stderr: "" Feb 11 00:18:05.450: INFO: stdout: "e2e-test-crd-publish-openapi-9040-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 11 00:18:05.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1775 delete e2e-test-crd-publish-openapi-9040-crds test-cr' Feb 11 00:18:05.653: INFO: stderr: "" Feb 11 00:18:05.653: INFO: stdout: 
"e2e-test-crd-publish-openapi-9040-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Feb 11 00:18:05.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1775 apply -f -' Feb 11 00:18:06.180: INFO: stderr: "" Feb 11 00:18:06.180: INFO: stdout: "e2e-test-crd-publish-openapi-9040-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 11 00:18:06.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1775 delete e2e-test-crd-publish-openapi-9040-crds test-cr' Feb 11 00:18:06.323: INFO: stderr: "" Feb 11 00:18:06.323: INFO: stdout: "e2e-test-crd-publish-openapi-9040-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Feb 11 00:18:06.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9040-crds' Feb 11 00:18:06.925: INFO: stderr: "" Feb 11 00:18:06.925: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9040-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:18:10.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1775" for this suite. • [SLOW TEST:11.659 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":98,"skipped":1449,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:18:10.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:18:10.565: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:18:11.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5056" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":280,"completed":99,"skipped":1449,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:18:11.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-projected-all-test-volume-07e8f014-727b-4968-bd7f-17f26372f8d6 STEP: Creating secret with name secret-projected-all-test-volume-e12986ac-57fc-45cb-9610-8ae200f7070c STEP: Creating a pod to test Check all projections for projected volume plugin Feb 11 00:18:11.811: INFO: Waiting up to 5m0s for pod "projected-volume-746ff5c3-ad12-4518-a222-7742c6cbe7e9" in namespace "projected-6547" to be "success or failure" Feb 11 00:18:11.826: INFO: Pod "projected-volume-746ff5c3-ad12-4518-a222-7742c6cbe7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.316788ms Feb 11 00:18:13.838: INFO: Pod "projected-volume-746ff5c3-ad12-4518-a222-7742c6cbe7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026023132s Feb 11 00:18:15.886: INFO: Pod "projected-volume-746ff5c3-ad12-4518-a222-7742c6cbe7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074066209s Feb 11 00:18:17.893: INFO: Pod "projected-volume-746ff5c3-ad12-4518-a222-7742c6cbe7e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081041927s Feb 11 00:18:19.900: INFO: Pod "projected-volume-746ff5c3-ad12-4518-a222-7742c6cbe7e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088343501s STEP: Saw pod success Feb 11 00:18:19.900: INFO: Pod "projected-volume-746ff5c3-ad12-4518-a222-7742c6cbe7e9" satisfied condition "success or failure" Feb 11 00:18:19.904: INFO: Trying to get logs from node jerma-node pod projected-volume-746ff5c3-ad12-4518-a222-7742c6cbe7e9 container projected-all-volume-test: STEP: delete the pod Feb 11 00:18:20.018: INFO: Waiting for pod projected-volume-746ff5c3-ad12-4518-a222-7742c6cbe7e9 to disappear Feb 11 00:18:20.027: INFO: Pod projected-volume-746ff5c3-ad12-4518-a222-7742c6cbe7e9 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:18:20.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6547" for this suite. 
• [SLOW TEST:8.440 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":100,"skipped":1455,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:18:20.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:18:20.116: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 11 00:18:22.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1540 create -f -' Feb 11 00:18:28.346: INFO: stderr: "" Feb 11 00:18:28.346: INFO: stdout: "e2e-test-crd-publish-openapi-7629-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 11 00:18:28.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1540 delete e2e-test-crd-publish-openapi-7629-crds test-cr' Feb 11 00:18:28.478: INFO: stderr: "" Feb 11 00:18:28.478: INFO: stdout: "e2e-test-crd-publish-openapi-7629-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Feb 11 00:18:28.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1540 apply -f -' Feb 11 00:18:28.904: INFO: stderr: "" Feb 11 00:18:28.905: INFO: stdout: "e2e-test-crd-publish-openapi-7629-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 11 00:18:28.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1540 delete e2e-test-crd-publish-openapi-7629-crds test-cr' Feb 11 00:18:29.064: INFO: stderr: "" Feb 11 00:18:29.064: INFO: stdout: "e2e-test-crd-publish-openapi-7629-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 11 00:18:29.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7629-crds' Feb 11 00:18:29.453: INFO: stderr: "" Feb 11 00:18:29.453: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7629-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the 
versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:18:33.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1540" for this suite. • [SLOW TEST:13.006 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":101,"skipped":1456,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:18:33.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 11 00:18:33.144: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 11 00:18:33.183: INFO: Waiting for terminating namespaces to be deleted... 
Feb 11 00:18:33.186: INFO: Logging pods the kubelet thinks are on node jerma-node before test Feb 11 00:18:33.193: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 11 00:18:33.193: INFO: Container kube-proxy ready: true, restart count 0 Feb 11 00:18:33.193: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 11 00:18:33.193: INFO: Container weave ready: true, restart count 1 Feb 11 00:18:33.193: INFO: Container weave-npc ready: true, restart count 0 Feb 11 00:18:33.193: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Feb 11 00:18:33.210: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 11 00:18:33.210: INFO: Container kube-apiserver ready: true, restart count 1 Feb 11 00:18:33.210: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 11 00:18:33.210: INFO: Container etcd ready: true, restart count 1 Feb 11 00:18:33.210: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 11 00:18:33.210: INFO: Container coredns ready: true, restart count 0 Feb 11 00:18:33.210: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 11 00:18:33.210: INFO: Container coredns ready: true, restart count 0 Feb 11 00:18:33.210: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 11 00:18:33.210: INFO: Container kube-controller-manager ready: true, restart count 5 Feb 11 00:18:33.210: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 11 00:18:33.210: INFO: Container kube-proxy ready: true, restart count 0 Feb 11 00:18:33.210: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 11 00:18:33.210: INFO: Container weave ready: true, restart count 0 Feb 11 00:18:33.210: INFO: Container weave-npc ready: true, restart count 0 Feb 11 00:18:33.210: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 11 00:18:33.210: INFO: Container kube-scheduler ready: true, restart count 7 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8850c3aa-337a-42e8-ac55-530c04abe83f 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-8850c3aa-337a-42e8-ac55-530c04abe83f off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-8850c3aa-337a-42e8-ac55-530c04abe83f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:23:49.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8075" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:316.731 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":102,"skipped":1461,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:23:49.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:23:49.988: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Feb 11 00:23:50.012: INFO: Number of nodes with available pods: 0 Feb 11 00:23:50.012: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:23:51.191: INFO: Number of nodes with available pods: 0 Feb 11 00:23:51.192: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:23:52.639: INFO: Number of nodes with available pods: 0 Feb 11 00:23:52.639: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:23:53.123: INFO: Number of nodes with available pods: 0 Feb 11 00:23:53.123: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:23:54.029: INFO: Number of nodes with available pods: 0 Feb 11 00:23:54.029: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:23:55.964: INFO: Number of nodes with available pods: 0 Feb 11 00:23:55.965: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:23:56.355: INFO: Number of nodes with available pods: 0 Feb 11 00:23:56.355: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:23:57.236: INFO: Number of nodes with available pods: 0 Feb 11 00:23:57.236: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:23:58.023: INFO: Number of nodes with available pods: 1 Feb 11 00:23:58.023: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 11 00:23:59.020: INFO: Number of nodes with available pods: 1 Feb 11 00:23:59.020: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 11 00:24:00.019: INFO: Number of nodes with available pods: 2 Feb 11 00:24:00.019: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 11 00:24:00.103: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:00.103: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:01.120: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:01.120: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:02.220: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:02.220: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:03.115: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:03.115: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:04.117: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:04.117: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:05.122: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 11 00:24:05.122: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:06.118: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:06.118: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:06.118: INFO: Pod daemon-set-pk7n2 is not available Feb 11 00:24:07.121: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:07.121: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:07.121: INFO: Pod daemon-set-pk7n2 is not available Feb 11 00:24:08.118: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:08.118: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:08.118: INFO: Pod daemon-set-pk7n2 is not available Feb 11 00:24:09.118: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:09.118: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:09.118: INFO: Pod daemon-set-pk7n2 is not available Feb 11 00:24:10.120: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:10.120: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:10.120: INFO: Pod daemon-set-pk7n2 is not available Feb 11 00:24:11.117: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:11.117: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:11.117: INFO: Pod daemon-set-pk7n2 is not available Feb 11 00:24:12.119: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:12.119: INFO: Wrong image for pod: daemon-set-pk7n2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:12.119: INFO: Pod daemon-set-pk7n2 is not available Feb 11 00:24:13.140: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:13.140: INFO: Pod daemon-set-mblht is not available Feb 11 00:24:14.118: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:14.118: INFO: Pod daemon-set-mblht is not available Feb 11 00:24:18.256: INFO: Wrong image for pod: daemon-set-4zx8j. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:18.256: INFO: Pod daemon-set-mblht is not available Feb 11 00:24:19.118: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:19.118: INFO: Pod daemon-set-mblht is not available Feb 11 00:24:20.118: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:20.118: INFO: Pod daemon-set-mblht is not available Feb 11 00:24:21.183: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:21.183: INFO: Pod daemon-set-mblht is not available Feb 11 00:24:22.119: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:22.120: INFO: Pod daemon-set-mblht is not available Feb 11 00:24:23.119: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:23.119: INFO: Pod daemon-set-mblht is not available Feb 11 00:24:24.121: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:25.117: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:26.163: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:27.121: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:28.119: INFO: Wrong image for pod: daemon-set-4zx8j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Feb 11 00:24:28.119: INFO: Pod daemon-set-4zx8j is not available Feb 11 00:24:29.128: INFO: Pod daemon-set-rkdh8 is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Feb 11 00:24:29.169: INFO: Number of nodes with available pods: 1 Feb 11 00:24:29.169: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:24:30.209: INFO: Number of nodes with available pods: 1 Feb 11 00:24:30.209: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:24:31.182: INFO: Number of nodes with available pods: 1 Feb 11 00:24:31.182: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:24:32.185: INFO: Number of nodes with available pods: 1 Feb 11 00:24:32.185: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:24:33.326: INFO: Number of nodes with available pods: 1 Feb 11 00:24:33.326: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:24:34.189: INFO: Number of nodes with available pods: 1 Feb 11 00:24:34.190: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:24:35.185: INFO: Number of nodes with available pods: 1 Feb 11 00:24:35.185: INFO: Node jerma-node is running more than one daemon pod Feb 11 00:24:36.323: INFO: Number of nodes with available pods: 2 Feb 11 00:24:36.324: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1138, will wait for the garbage collector to delete the pods Feb 11 00:24:36.507: INFO: Deleting DaemonSet.extensions daemon-set took: 16.685917ms Feb 11 00:24:36.808: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.722081ms Feb 11 00:24:53.160: INFO: Number of nodes with available pods: 0 Feb 11 00:24:53.160: INFO: Number of running nodes: 0, number of available pods: 0 Feb 11 00:24:53.185: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1138/daemonsets","resourceVersion":"7639142"},"items":null} Feb 11 00:24:53.211: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1138/pods","resourceVersion":"7639143"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:24:53.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1138" for this suite. 
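The rollout just logged is the RollingUpdate strategy at work: once spec.template changes (the suite swaps docker.io/library/httpd:2.4.38-alpine for gcr.io/kubernetes-e2e-test-images/agnhost:2.8), the controller replaces pods node by node, which is why only one pod at a time is reported "not available" above. A sketch of an equivalent DaemonSet (manifest shape assumed, not the suite's exact spec):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels: {app: daemon-set}
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1   # one pod down at a time, as seen in the log
      template:
        metadata:
          labels: {app: daemon-set}
        spec:
          containers:
          - name: app
            image: docker.io/library/httpd:2.4.38-alpine
    EOF
    # Updating the template triggers a rollout like the one watched above:
    # kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    # kubectl rollout status daemonset/daemon-set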
• [SLOW TEST:63.426 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":103,"skipped":1471,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:24:53.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 11 00:24:53.766: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 11 00:24:55.790: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:24:57.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 11 00:24:59.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977493, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 11 00:25:02.865: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:25:02.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7133" for this suite. STEP: Destroying namespace "webhook-7133-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.945 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":104,"skipped":1477,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:25:03.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service multi-endpoint-test in namespace services-7823 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7823 to expose endpoints map[] Feb 11 00:25:03.325: INFO: successfully validated that service multi-endpoint-test in namespace services-7823 exposes endpoints map[] (10.900754ms elapsed) STEP: Creating pod pod1 in namespace services-7823 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7823 to expose endpoints map[pod1:[100]] Feb 11 00:25:07.614: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.278991018s elapsed, will retry) Feb 11 00:25:13.024: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.688724983s elapsed, will retry) Feb 11 00:25:14.035: INFO: successfully validated that service multi-endpoint-test in namespace services-7823 exposes endpoints map[pod1:[100]] (10.699972516s elapsed) STEP: Creating pod pod2 in namespace services-7823 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7823 to expose endpoints map[pod1:[100] pod2:[101]] Feb 11 00:25:19.002: INFO: Unexpected endpoints: found map[11a68fcf-81f1-4029-913d-61339476263d:[100]], expected map[pod1:[100] pod2:[101]] (4.960849957s elapsed, will retry) Feb 11 00:25:22.056: INFO: successfully validated that service multi-endpoint-test in namespace services-7823 exposes endpoints map[pod1:[100] pod2:[101]] (8.015145207s elapsed) STEP: Deleting pod pod1 in namespace services-7823 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7823 to expose endpoints map[pod2:[101]] Feb 11 00:25:22.135: INFO: successfully validated that service multi-endpoint-test in namespace services-7823 exposes endpoints map[pod2:[101]] (62.217817ms elapsed) STEP: Deleting pod pod2 in namespace services-7823 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7823 to expose endpoints map[] Feb 11 00:25:23.216: 
INFO: successfully validated that service multi-endpoint-test in namespace services-7823 exposes endpoints map[] (1.067042502s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:25:23.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7823" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.218 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":280,"completed":105,"skipped":1488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:25:23.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Feb 11 00:25:24.128: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:25:38.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-467" for this suite. 
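The init-container case above hinges on restartPolicy Never: a single failing init container drives the pod to phase Failed and the app containers never start. A minimal reproduction (hypothetical names and images; the suite uses its own):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-fails
        image: busybox
        command: ["sh", "-c", "exit 1"]   # fails immediately
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "echo should never run"]
    EOF
    # kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # Failed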
• [SLOW TEST:15.190 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":106,"skipped":1512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:25:38.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 11 00:25:38.857: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed f797a065-42e0-4db8-8168-908565c375da 7639415 0 2020-02-11 00:25:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 11 00:25:38.858: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed f797a065-42e0-4db8-8168-908565c375da 7639417 0 2020-02-11 00:25:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 11 00:25:38.858: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed f797a065-42e0-4db8-8168-908565c375da 7639419 0 2020-02-11 00:25:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 11 00:25:48.996: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed f797a065-42e0-4db8-8168-908565c375da 7639455 0 2020-02-11 
00:25:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 11 00:25:48.997: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed f797a065-42e0-4db8-8168-908565c375da 7639456 0 2020-02-11 00:25:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 11 00:25:48.997: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-925 /api/v1/namespaces/watch-925/configmaps/e2e-watch-test-label-changed f797a065-42e0-4db8-8168-908565c375da 7639457 0 2020-02-11 00:25:38 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 00:25:48.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-925" for this suite. • [SLOW TEST:10.418 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":107,"skipped":1555,"failed":0} [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:25:49.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:25:49.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 11 00:25:49.358: INFO: stderr: "" Feb 11 00:25:49.358: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 11 
00:25:49.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-183" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":280,"completed":108,"skipped":1555,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 11 00:25:49.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 11 00:25:49.538: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 50.070815ms)
Feb 11 00:25:49.543: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.870056ms)
Feb 11 00:25:49.593: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 49.321098ms)
Feb 11 00:25:49.599: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.805982ms)
Feb 11 00:25:49.604: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.994298ms)
Feb 11 00:25:49.608: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.191369ms)
Feb 11 00:25:49.613: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.381255ms)
Feb 11 00:25:49.616: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.852686ms)
Feb 11 00:25:49.621: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.459873ms)
Feb 11 00:25:49.625: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.745814ms)
Feb 11 00:25:49.629: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 4.306077ms)
Feb 11 00:25:49.635: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 6.209783ms)
Feb 11 00:25:49.639: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.727645ms)
Feb 11 00:25:49.643: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.624436ms)
Feb 11 00:25:49.646: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.14784ms)
Feb 11 00:25:49.649: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.098624ms)
Feb 11 00:25:49.652: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.385316ms)
Feb 11 00:25:49.656: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.430663ms)
Feb 11 00:25:49.659: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.215958ms)
Feb 11 00:25:49.663: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 3.66919ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:25:49.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4271" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":280,"completed":109,"skipped":1574,"failed":0}

------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:25:49.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:26:05.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6898" for this suite.

• [SLOW TEST:16.272 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":110,"skipped":1574,"failed":0}
SSSSSSSSSSSSSSSSSSS
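The quota lifecycle above is: create a ResourceQuota that counts configmaps, watch status.used rise when a ConfigMap is created, and fall back once it is deleted. A sketch of such a quota (name and limit illustrative, not the suite's):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota-configmap-demo
    spec:
      hard:
        configmaps: "2"    # cap on ConfigMaps tracked in this namespace
    EOF
    # kubectl describe resourcequota quota-configmap-demo   # shows used vs hard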
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:26:05.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3757
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-3757
I0211 00:26:06.161875       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3757, replica count: 2
I0211 00:26:09.213014       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:26:12.213378       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:26:15.213698       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:26:18.214083       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 11 00:26:18.214: INFO: Creating new exec pod
Feb 11 00:26:27.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3757 execpodmm4qq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 11 00:26:27.749: INFO: stderr: "I0211 00:26:27.532692    1357 log.go:172] (0xc0001002c0) (0xc00059c780) Create stream\nI0211 00:26:27.532859    1357 log.go:172] (0xc0001002c0) (0xc00059c780) Stream added, broadcasting: 1\nI0211 00:26:27.536877    1357 log.go:172] (0xc0001002c0) Reply frame received for 1\nI0211 00:26:27.536923    1357 log.go:172] (0xc0001002c0) (0xc000791400) Create stream\nI0211 00:26:27.536936    1357 log.go:172] (0xc0001002c0) (0xc000791400) Stream added, broadcasting: 3\nI0211 00:26:27.538750    1357 log.go:172] (0xc0001002c0) Reply frame received for 3\nI0211 00:26:27.538848    1357 log.go:172] (0xc0001002c0) (0xc0007914a0) Create stream\nI0211 00:26:27.538858    1357 log.go:172] (0xc0001002c0) (0xc0007914a0) Stream added, broadcasting: 5\nI0211 00:26:27.540849    1357 log.go:172] (0xc0001002c0) Reply frame received for 5\nI0211 00:26:27.636827    1357 log.go:172] (0xc0001002c0) Data frame received for 5\nI0211 00:26:27.636922    1357 log.go:172] (0xc0007914a0) (5) Data frame handling\nI0211 00:26:27.636965    1357 log.go:172] (0xc0007914a0) (5) Data frame sent\nI0211 00:26:27.636975    1357 log.go:172] (0xc0001002c0) Data frame received for 5\nI0211 00:26:27.636983    1357 log.go:172] (0xc0007914a0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0211 00:26:27.637055    1357 log.go:172] (0xc0007914a0) (5) Data frame sent\nI0211 00:26:27.646679    1357 log.go:172] (0xc0001002c0) Data frame received for 5\nI0211 00:26:27.646717    1357 log.go:172] (0xc0007914a0) (5) Data frame handling\nI0211 00:26:27.646738    1357 log.go:172] (0xc0007914a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0211 00:26:27.734722    1357 log.go:172] (0xc0001002c0) Data frame received for 1\nI0211 00:26:27.734790    1357 log.go:172] (0xc0001002c0) (0xc000791400) Stream removed, broadcasting: 3\nI0211 00:26:27.734834    1357 log.go:172] (0xc00059c780) (1) Data frame handling\nI0211 00:26:27.734858    1357 log.go:172] (0xc00059c780) (1) Data frame sent\nI0211 00:26:27.734871    1357 log.go:172] (0xc0001002c0) (0xc00059c780) Stream removed, broadcasting: 1\nI0211 00:26:27.734956    1357 log.go:172] (0xc0001002c0) (0xc0007914a0) Stream removed, broadcasting: 5\nI0211 00:26:27.735034    1357 log.go:172] (0xc0001002c0) Go away received\nI0211 00:26:27.735794    1357 log.go:172] (0xc0001002c0) (0xc00059c780) Stream removed, broadcasting: 1\nI0211 00:26:27.735815    1357 log.go:172] (0xc0001002c0) (0xc000791400) Stream removed, broadcasting: 3\nI0211 00:26:27.735828    1357 log.go:172] (0xc0001002c0) (0xc0007914a0) Stream removed, broadcasting: 5\n"
Feb 11 00:26:27.749: INFO: stdout: ""
Feb 11 00:26:27.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3757 execpodmm4qq -- /bin/sh -x -c nc -zv -t -w 2 10.96.242.193 80'
Feb 11 00:26:28.241: INFO: stderr: "I0211 00:26:28.052997    1378 log.go:172] (0xc000b722c0) (0xc000be80a0) Create stream\nI0211 00:26:28.053188    1378 log.go:172] (0xc000b722c0) (0xc000be80a0) Stream added, broadcasting: 1\nI0211 00:26:28.059913    1378 log.go:172] (0xc000b722c0) Reply frame received for 1\nI0211 00:26:28.059988    1378 log.go:172] (0xc000b722c0) (0xc000b68140) Create stream\nI0211 00:26:28.060002    1378 log.go:172] (0xc000b722c0) (0xc000b68140) Stream added, broadcasting: 3\nI0211 00:26:28.061894    1378 log.go:172] (0xc000b722c0) Reply frame received for 3\nI0211 00:26:28.061936    1378 log.go:172] (0xc000b722c0) (0xc000ae60a0) Create stream\nI0211 00:26:28.061952    1378 log.go:172] (0xc000b722c0) (0xc000ae60a0) Stream added, broadcasting: 5\nI0211 00:26:28.063748    1378 log.go:172] (0xc000b722c0) Reply frame received for 5\nI0211 00:26:28.142744    1378 log.go:172] (0xc000b722c0) Data frame received for 5\nI0211 00:26:28.143214    1378 log.go:172] (0xc000ae60a0) (5) Data frame handling\nI0211 00:26:28.143310    1378 log.go:172] (0xc000ae60a0) (5) Data frame sent\nI0211 00:26:28.143756    1378 log.go:172] (0xc000b722c0) Data frame received for 5\nI0211 00:26:28.143784    1378 log.go:172] (0xc000ae60a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.242.193 80\nConnection to 10.96.242.193 80 port [tcp/http] succeeded!\nI0211 00:26:28.143870    1378 log.go:172] (0xc000ae60a0) (5) Data frame sent\nI0211 00:26:28.222932    1378 log.go:172] (0xc000b722c0) Data frame received for 1\nI0211 00:26:28.223050    1378 log.go:172] (0xc000b722c0) (0xc000ae60a0) Stream removed, broadcasting: 5\nI0211 00:26:28.223105    1378 log.go:172] (0xc000be80a0) (1) Data frame handling\nI0211 00:26:28.223126    1378 log.go:172] (0xc000be80a0) (1) Data frame sent\nI0211 00:26:28.223159    1378 log.go:172] (0xc000b722c0) (0xc000b68140) Stream removed, broadcasting: 3\nI0211 00:26:28.223211    1378 log.go:172] (0xc000b722c0) (0xc000be80a0) Stream removed, broadcasting: 1\nI0211 00:26:28.223228    1378 log.go:172] (0xc000b722c0) Go away received\nI0211 00:26:28.224433    1378 log.go:172] (0xc000b722c0) (0xc000be80a0) Stream removed, broadcasting: 1\nI0211 00:26:28.224447    1378 log.go:172] (0xc000b722c0) (0xc000b68140) Stream removed, broadcasting: 3\nI0211 00:26:28.224454    1378 log.go:172] (0xc000b722c0) (0xc000ae60a0) Stream removed, broadcasting: 5\n"
Feb 11 00:26:28.241: INFO: stdout: ""
Feb 11 00:26:28.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3757 execpodmm4qq -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32355'
Feb 11 00:26:28.609: INFO: stderr: "I0211 00:26:28.381307    1399 log.go:172] (0xc0007a8630) (0xc0007ba0a0) Create stream\nI0211 00:26:28.381488    1399 log.go:172] (0xc0007a8630) (0xc0007ba0a0) Stream added, broadcasting: 1\nI0211 00:26:28.385309    1399 log.go:172] (0xc0007a8630) Reply frame received for 1\nI0211 00:26:28.385345    1399 log.go:172] (0xc0007a8630) (0xc0006f5c20) Create stream\nI0211 00:26:28.385352    1399 log.go:172] (0xc0007a8630) (0xc0006f5c20) Stream added, broadcasting: 3\nI0211 00:26:28.386424    1399 log.go:172] (0xc0007a8630) Reply frame received for 3\nI0211 00:26:28.386491    1399 log.go:172] (0xc0007a8630) (0xc0006f5e00) Create stream\nI0211 00:26:28.386512    1399 log.go:172] (0xc0007a8630) (0xc0006f5e00) Stream added, broadcasting: 5\nI0211 00:26:28.390266    1399 log.go:172] (0xc0007a8630) Reply frame received for 5\nI0211 00:26:28.465287    1399 log.go:172] (0xc0007a8630) Data frame received for 5\nI0211 00:26:28.465410    1399 log.go:172] (0xc0006f5e00) (5) Data frame handling\nI0211 00:26:28.465462    1399 log.go:172] (0xc0006f5e00) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32355\nI0211 00:26:28.466840    1399 log.go:172] (0xc0007a8630) Data frame received for 5\nI0211 00:26:28.466863    1399 log.go:172] (0xc0006f5e00) (5) Data frame handling\nI0211 00:26:28.466882    1399 log.go:172] (0xc0006f5e00) (5) Data frame sent\nConnection to 10.96.2.250 32355 port [tcp/32355] succeeded!\nI0211 00:26:28.596219    1399 log.go:172] (0xc0007a8630) (0xc0006f5c20) Stream removed, broadcasting: 3\nI0211 00:26:28.596363    1399 log.go:172] (0xc0007a8630) Data frame received for 1\nI0211 00:26:28.596387    1399 log.go:172] (0xc0007ba0a0) (1) Data frame handling\nI0211 00:26:28.596444    1399 log.go:172] (0xc0007ba0a0) (1) Data frame sent\nI0211 00:26:28.596462    1399 log.go:172] (0xc0007a8630) (0xc0007ba0a0) Stream removed, broadcasting: 1\nI0211 00:26:28.597947    1399 log.go:172] (0xc0007a8630) (0xc0006f5e00) Stream removed, broadcasting: 5\nI0211 00:26:28.598046    1399 log.go:172] (0xc0007a8630) (0xc0007ba0a0) Stream removed, broadcasting: 1\nI0211 00:26:28.598075    1399 log.go:172] (0xc0007a8630) (0xc0006f5c20) Stream removed, broadcasting: 3\nI0211 00:26:28.598140    1399 log.go:172] (0xc0007a8630) Go away received\nI0211 00:26:28.598321    1399 log.go:172] (0xc0007a8630) (0xc0006f5e00) Stream removed, broadcasting: 5\n"
Feb 11 00:26:28.610: INFO: stdout: ""
Feb 11 00:26:28.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3757 execpodmm4qq -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32355'
Feb 11 00:26:28.884: INFO: stderr: "I0211 00:26:28.738541    1420 log.go:172] (0xc0009c0000) (0xc00068a780) Create stream\nI0211 00:26:28.738740    1420 log.go:172] (0xc0009c0000) (0xc00068a780) Stream added, broadcasting: 1\nI0211 00:26:28.742645    1420 log.go:172] (0xc0009c0000) Reply frame received for 1\nI0211 00:26:28.742731    1420 log.go:172] (0xc0009c0000) (0xc0004cf400) Create stream\nI0211 00:26:28.742740    1420 log.go:172] (0xc0009c0000) (0xc0004cf400) Stream added, broadcasting: 3\nI0211 00:26:28.743839    1420 log.go:172] (0xc0009c0000) Reply frame received for 3\nI0211 00:26:28.743857    1420 log.go:172] (0xc0009c0000) (0xc0004cf4a0) Create stream\nI0211 00:26:28.743863    1420 log.go:172] (0xc0009c0000) (0xc0004cf4a0) Stream added, broadcasting: 5\nI0211 00:26:28.744706    1420 log.go:172] (0xc0009c0000) Reply frame received for 5\nI0211 00:26:28.800607    1420 log.go:172] (0xc0009c0000) Data frame received for 5\nI0211 00:26:28.800653    1420 log.go:172] (0xc0004cf4a0) (5) Data frame handling\nI0211 00:26:28.800672    1420 log.go:172] (0xc0004cf4a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32355\nI0211 00:26:28.806829    1420 log.go:172] (0xc0009c0000) Data frame received for 5\nI0211 00:26:28.806842    1420 log.go:172] (0xc0004cf4a0) (5) Data frame handling\nI0211 00:26:28.806850    1420 log.go:172] (0xc0004cf4a0) (5) Data frame sent\nConnection to 10.96.1.234 32355 port [tcp/32355] succeeded!\nI0211 00:26:28.876594    1420 log.go:172] (0xc0009c0000) (0xc0004cf400) Stream removed, broadcasting: 3\nI0211 00:26:28.876852    1420 log.go:172] (0xc0009c0000) Data frame received for 1\nI0211 00:26:28.877003    1420 log.go:172] (0xc0009c0000) (0xc0004cf4a0) Stream removed, broadcasting: 5\nI0211 00:26:28.877067    1420 log.go:172] (0xc00068a780) (1) Data frame handling\nI0211 00:26:28.877089    1420 log.go:172] (0xc00068a780) (1) Data frame sent\nI0211 00:26:28.877115    1420 log.go:172] (0xc0009c0000) (0xc00068a780) Stream removed, broadcasting: 1\nI0211 00:26:28.877132    1420 log.go:172] (0xc0009c0000) Go away received\nI0211 00:26:28.877677    1420 log.go:172] (0xc0009c0000) (0xc00068a780) Stream removed, broadcasting: 1\nI0211 00:26:28.877689    1420 log.go:172] (0xc0009c0000) (0xc0004cf400) Stream removed, broadcasting: 3\nI0211 00:26:28.877693    1420 log.go:172] (0xc0009c0000) (0xc0004cf4a0) Stream removed, broadcasting: 5\n"
Feb 11 00:26:28.884: INFO: stdout: ""
Feb 11 00:26:28.884: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:26:28.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3757" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:23.025 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":111,"skipped":1593,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:26:28.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 00:26:29.107: INFO: Create a RollingUpdate DaemonSet
Feb 11 00:26:29.125: INFO: Check that daemon pods launch on every node of the cluster
Feb 11 00:26:29.237: INFO: Number of nodes with available pods: 0
Feb 11 00:26:29.237: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:30.807: INFO: Number of nodes with available pods: 0
Feb 11 00:26:30.807: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:31.424: INFO: Number of nodes with available pods: 0
Feb 11 00:26:31.424: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:32.269: INFO: Number of nodes with available pods: 0
Feb 11 00:26:32.269: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:33.265: INFO: Number of nodes with available pods: 0
Feb 11 00:26:33.265: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:34.296: INFO: Number of nodes with available pods: 0
Feb 11 00:26:34.297: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:36.145: INFO: Number of nodes with available pods: 0
Feb 11 00:26:36.146: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:36.396: INFO: Number of nodes with available pods: 0
Feb 11 00:26:36.396: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:37.513: INFO: Number of nodes with available pods: 0
Feb 11 00:26:37.513: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:38.274: INFO: Number of nodes with available pods: 0
Feb 11 00:26:38.274: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:39.248: INFO: Number of nodes with available pods: 0
Feb 11 00:26:39.248: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:40.259: INFO: Number of nodes with available pods: 1
Feb 11 00:26:40.259: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:43.238: INFO: Number of nodes with available pods: 1
Feb 11 00:26:43.238: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:26:44.250: INFO: Number of nodes with available pods: 2
Feb 11 00:26:44.250: INFO: Number of running nodes: 2, number of available pods: 2
Feb 11 00:26:44.250: INFO: Update the DaemonSet to trigger a rollout
Feb 11 00:26:44.258: INFO: Updating DaemonSet daemon-set
Feb 11 00:27:03.316: INFO: Roll back the DaemonSet before rollout is complete
Feb 11 00:27:03.327: INFO: Updating DaemonSet daemon-set
Feb 11 00:27:03.327: INFO: Make sure DaemonSet rollback is complete
Feb 11 00:27:03.343: INFO: Wrong image for pod: daemon-set-qgqhm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 11 00:27:03.343: INFO: Pod daemon-set-qgqhm is not available
Feb 11 00:27:04.571: INFO: Wrong image for pod: daemon-set-qgqhm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 11 00:27:04.571: INFO: Pod daemon-set-qgqhm is not available
Feb 11 00:27:05.568: INFO: Wrong image for pod: daemon-set-qgqhm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 11 00:27:05.568: INFO: Pod daemon-set-qgqhm is not available
Feb 11 00:27:06.572: INFO: Wrong image for pod: daemon-set-qgqhm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 11 00:27:06.572: INFO: Pod daemon-set-qgqhm is not available
Feb 11 00:27:07.570: INFO: Wrong image for pod: daemon-set-qgqhm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 11 00:27:07.570: INFO: Pod daemon-set-qgqhm is not available
Feb 11 00:27:08.573: INFO: Pod daemon-set-kdghr is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5324, will wait for the garbage collector to delete the pods
Feb 11 00:27:08.657: INFO: Deleting DaemonSet.extensions daemon-set took: 9.715467ms
Feb 11 00:27:08.958: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.952807ms
Feb 11 00:27:23.178: INFO: Number of nodes with available pods: 0
Feb 11 00:27:23.178: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 00:27:23.182: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5324/daemonsets","resourceVersion":"7639855"},"items":null}

Feb 11 00:27:23.186: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5324/pods","resourceVersion":"7639855"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:27:23.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5324" for this suite.

• [SLOW TEST:54.246 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":112,"skipped":1626,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:27:23.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-downwardapi-gj5d
STEP: Creating a pod to test atomic-volume-subpath
Feb 11 00:27:23.392: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gj5d" in namespace "subpath-6040" to be "success or failure"
Feb 11 00:27:23.419: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.608574ms
Feb 11 00:27:25.427: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034523365s
Feb 11 00:27:27.434: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041304464s
Feb 11 00:27:29.439: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046463245s
Feb 11 00:27:31.445: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Running", Reason="", readiness=true. Elapsed: 8.052545451s
Feb 11 00:27:33.454: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Running", Reason="", readiness=true. Elapsed: 10.061304962s
Feb 11 00:27:35.461: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Running", Reason="", readiness=true. Elapsed: 12.068382456s
Feb 11 00:27:37.473: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Running", Reason="", readiness=true. Elapsed: 14.080965219s
Feb 11 00:27:39.481: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Running", Reason="", readiness=true. Elapsed: 16.088623944s
Feb 11 00:27:42.196: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Running", Reason="", readiness=true. Elapsed: 18.803987548s
Feb 11 00:27:44.204: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Running", Reason="", readiness=true. Elapsed: 20.81215123s
Feb 11 00:27:46.210: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Running", Reason="", readiness=true. Elapsed: 22.818082297s
Feb 11 00:27:48.219: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Running", Reason="", readiness=true. Elapsed: 24.826428092s
Feb 11 00:27:50.226: INFO: Pod "pod-subpath-test-downwardapi-gj5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.833503259s
STEP: Saw pod success
Feb 11 00:27:50.226: INFO: Pod "pod-subpath-test-downwardapi-gj5d" satisfied condition "success or failure"
Feb 11 00:27:50.229: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-gj5d container test-container-subpath-downwardapi-gj5d: 
STEP: delete the pod
Feb 11 00:27:50.318: INFO: Waiting for pod pod-subpath-test-downwardapi-gj5d to disappear
Feb 11 00:27:50.322: INFO: Pod pod-subpath-test-downwardapi-gj5d no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-gj5d
Feb 11 00:27:50.322: INFO: Deleting pod "pod-subpath-test-downwardapi-gj5d" in namespace "subpath-6040"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:27:50.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6040" for this suite.

• [SLOW TEST:27.116 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":113,"skipped":1697,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:27:50.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-1429
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 11 00:27:50.402: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 11 00:27:50.590: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:27:52.614: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:27:54.635: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:27:57.145: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:27:59.070: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:28:00.601: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:28:02.603: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:28:04.603: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:28:06.599: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:28:08.604: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:28:10.599: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:28:12.604: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:28:14.598: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:28:16.601: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 11 00:28:16.611: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 11 00:28:24.686: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1429 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 00:28:24.687: INFO: >>> kubeConfig: /root/.kube/config
I0211 00:28:24.756392       9 log.go:172] (0xc002dd8420) (0xc001385680) Create stream
I0211 00:28:24.756799       9 log.go:172] (0xc002dd8420) (0xc001385680) Stream added, broadcasting: 1
I0211 00:28:24.762724       9 log.go:172] (0xc002dd8420) Reply frame received for 1
I0211 00:28:24.762806       9 log.go:172] (0xc002dd8420) (0xc0007d7b80) Create stream
I0211 00:28:24.762827       9 log.go:172] (0xc002dd8420) (0xc0007d7b80) Stream added, broadcasting: 3
I0211 00:28:24.764462       9 log.go:172] (0xc002dd8420) Reply frame received for 3
I0211 00:28:24.764502       9 log.go:172] (0xc002dd8420) (0xc0007d7c20) Create stream
I0211 00:28:24.764511       9 log.go:172] (0xc002dd8420) (0xc0007d7c20) Stream added, broadcasting: 5
I0211 00:28:24.766304       9 log.go:172] (0xc002dd8420) Reply frame received for 5
I0211 00:28:24.921359       9 log.go:172] (0xc002dd8420) Data frame received for 3
I0211 00:28:24.921427       9 log.go:172] (0xc0007d7b80) (3) Data frame handling
I0211 00:28:24.921440       9 log.go:172] (0xc0007d7b80) (3) Data frame sent
I0211 00:28:25.010756       9 log.go:172] (0xc002dd8420) Data frame received for 1
I0211 00:28:25.010818       9 log.go:172] (0xc002dd8420) (0xc0007d7c20) Stream removed, broadcasting: 5
I0211 00:28:25.010851       9 log.go:172] (0xc001385680) (1) Data frame handling
I0211 00:28:25.010870       9 log.go:172] (0xc002dd8420) (0xc0007d7b80) Stream removed, broadcasting: 3
I0211 00:28:25.010892       9 log.go:172] (0xc001385680) (1) Data frame sent
I0211 00:28:25.010899       9 log.go:172] (0xc002dd8420) (0xc001385680) Stream removed, broadcasting: 1
I0211 00:28:25.010905       9 log.go:172] (0xc002dd8420) Go away received
I0211 00:28:25.011034       9 log.go:172] (0xc002dd8420) (0xc001385680) Stream removed, broadcasting: 1
I0211 00:28:25.011044       9 log.go:172] (0xc002dd8420) (0xc0007d7b80) Stream removed, broadcasting: 3
I0211 00:28:25.011053       9 log.go:172] (0xc002dd8420) (0xc0007d7c20) Stream removed, broadcasting: 5
Feb 11 00:28:25.011: INFO: Waiting for responses: map[]
Feb 11 00:28:25.015: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1429 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 00:28:25.016: INFO: >>> kubeConfig: /root/.kube/config
I0211 00:28:25.051790       9 log.go:172] (0xc002c30790) (0xc000b03b80) Create stream
I0211 00:28:25.051879       9 log.go:172] (0xc002c30790) (0xc000b03b80) Stream added, broadcasting: 1
I0211 00:28:25.054942       9 log.go:172] (0xc002c30790) Reply frame received for 1
I0211 00:28:25.055002       9 log.go:172] (0xc002c30790) (0xc001385720) Create stream
I0211 00:28:25.055016       9 log.go:172] (0xc002c30790) (0xc001385720) Stream added, broadcasting: 3
I0211 00:28:25.056195       9 log.go:172] (0xc002c30790) Reply frame received for 3
I0211 00:28:25.056214       9 log.go:172] (0xc002c30790) (0xc00031cb40) Create stream
I0211 00:28:25.056220       9 log.go:172] (0xc002c30790) (0xc00031cb40) Stream added, broadcasting: 5
I0211 00:28:25.057706       9 log.go:172] (0xc002c30790) Reply frame received for 5
I0211 00:28:25.136317       9 log.go:172] (0xc002c30790) Data frame received for 3
I0211 00:28:25.136364       9 log.go:172] (0xc001385720) (3) Data frame handling
I0211 00:28:25.136374       9 log.go:172] (0xc001385720) (3) Data frame sent
I0211 00:28:25.191603       9 log.go:172] (0xc002c30790) Data frame received for 1
I0211 00:28:25.191798       9 log.go:172] (0xc002c30790) (0xc00031cb40) Stream removed, broadcasting: 5
I0211 00:28:25.191869       9 log.go:172] (0xc000b03b80) (1) Data frame handling
I0211 00:28:25.191902       9 log.go:172] (0xc000b03b80) (1) Data frame sent
I0211 00:28:25.191962       9 log.go:172] (0xc002c30790) (0xc001385720) Stream removed, broadcasting: 3
I0211 00:28:25.192059       9 log.go:172] (0xc002c30790) (0xc000b03b80) Stream removed, broadcasting: 1
I0211 00:28:25.192107       9 log.go:172] (0xc002c30790) Go away received
I0211 00:28:25.192576       9 log.go:172] (0xc002c30790) (0xc000b03b80) Stream removed, broadcasting: 1
I0211 00:28:25.192587       9 log.go:172] (0xc002c30790) (0xc001385720) Stream removed, broadcasting: 3
I0211 00:28:25.192593       9 log.go:172] (0xc002c30790) (0xc00031cb40) Stream removed, broadcasting: 5
Feb 11 00:28:25.192: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:28:25.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1429" for this suite.

• [SLOW TEST:34.864 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":114,"skipped":1715,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:28:25.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-b8fa4e36-bf50-4008-94ae-4a84b18fbf0d in namespace container-probe-6258
Feb 11 00:28:37.322: INFO: Started pod liveness-b8fa4e36-bf50-4008-94ae-4a84b18fbf0d in namespace container-probe-6258
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 00:28:37.327: INFO: Initial restart count of pod liveness-b8fa4e36-bf50-4008-94ae-4a84b18fbf0d is 0
Feb 11 00:28:59.429: INFO: Restart count of pod container-probe-6258/liveness-b8fa4e36-bf50-4008-94ae-4a84b18fbf0d is now 1 (22.101938355s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:28:59.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6258" for this suite.

• [SLOW TEST:34.344 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":115,"skipped":1771,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:28:59.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8360
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Feb 11 00:28:59.709: INFO: Found 0 stateful pods, waiting for 3
Feb 11 00:29:09.720: INFO: Found 2 stateful pods, waiting for 3
Feb 11 00:29:19.775: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 00:29:19.776: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 00:29:19.776: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 11 00:29:29.718: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 00:29:29.718: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 00:29:29.718: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 11 00:29:29.765: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 11 00:29:39.855: INFO: Updating stateful set ss2
Feb 11 00:29:39.947: INFO: Waiting for Pod statefulset-8360/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Feb 11 00:29:50.163: INFO: Found 2 stateful pods, waiting for 3
Feb 11 00:30:00.171: INFO: Found 2 stateful pods, waiting for 3
Feb 11 00:30:10.175: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 00:30:10.176: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 00:30:10.176: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 11 00:30:10.210: INFO: Updating stateful set ss2
Feb 11 00:30:10.255: INFO: Waiting for Pod statefulset-8360/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 11 00:30:20.532: INFO: Updating stateful set ss2
Feb 11 00:30:23.987: INFO: Waiting for StatefulSet statefulset-8360/ss2 to complete update
Feb 11 00:30:23.987: INFO: Waiting for Pod statefulset-8360/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 11 00:30:34.002: INFO: Waiting for StatefulSet statefulset-8360/ss2 to complete update
Feb 11 00:30:34.002: INFO: Waiting for Pod statefulset-8360/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 11 00:30:43.998: INFO: Waiting for StatefulSet statefulset-8360/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 11 00:30:54.000: INFO: Deleting all statefulset in ns statefulset-8360
Feb 11 00:30:54.007: INFO: Scaling statefulset ss2 to 0
Feb 11 00:31:24.045: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 00:31:24.054: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:31:24.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8360" for this suite.

• [SLOW TEST:144.598 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":116,"skipped":1781,"failed":0}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:31:24.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 11 00:31:24.232: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 11 00:31:24.301: INFO: Waiting for terminating namespaces to be deleted...
Feb 11 00:31:24.303: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 11 00:31:24.323: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 11 00:31:24.323: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 11 00:31:24.323: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 11 00:31:24.323: INFO: 	Container weave ready: true, restart count 1
Feb 11 00:31:24.323: INFO: 	Container weave-npc ready: true, restart count 0
Feb 11 00:31:24.323: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 11 00:31:24.338: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 11 00:31:24.338: INFO: 	Container kube-controller-manager ready: true, restart count 5
Feb 11 00:31:24.338: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 11 00:31:24.338: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 11 00:31:24.338: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 11 00:31:24.338: INFO: 	Container weave ready: true, restart count 0
Feb 11 00:31:24.338: INFO: 	Container weave-npc ready: true, restart count 0
Feb 11 00:31:24.338: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 11 00:31:24.338: INFO: 	Container kube-scheduler ready: true, restart count 7
Feb 11 00:31:24.338: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 11 00:31:24.338: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 11 00:31:24.338: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 11 00:31:24.338: INFO: 	Container etcd ready: true, restart count 1
Feb 11 00:31:24.338: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 11 00:31:24.338: INFO: 	Container coredns ready: true, restart count 0
Feb 11 00:31:24.338: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 11 00:31:24.338: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-6e2cfce0-18e3-4fda-86b7-34eb8f57642c 90
STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect scheduled
STEP: Trying to create a third pod (pod3) with hostport 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-6e2cfce0-18e3-4fda-86b7-34eb8f57642c off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6e2cfce0-18e3-4fda-86b7-34eb8f57642c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:31:58.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1835" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:34.851 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":117,"skipped":1789,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:31:59.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:32:19.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3235" for this suite.

• [SLOW TEST:20.745 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":118,"skipped":1823,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:32:19.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 11 00:32:28.643: INFO: Successfully updated pod "annotationupdatea570ba0b-fa92-45ef-afde-3ab4276f98c8"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:32:32.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3557" for this suite.

• [SLOW TEST:12.988 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":119,"skipped":1825,"failed":0}
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:32:32.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:32:39.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9605" for this suite.
STEP: Destroying namespace "nsdeletetest-7200" for this suite.
Feb 11 00:32:39.181: INFO: Namespace nsdeletetest-7200 was already deleted
STEP: Destroying namespace "nsdeletetest-7818" for this suite.

• [SLOW TEST:6.452 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":120,"skipped":1825,"failed":0}
S
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:32:39.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:32:39.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2971" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":280,"completed":121,"skipped":1826,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:32:39.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 11 00:32:39.720: INFO: Waiting up to 5m0s for pod "pod-85c4e4d0-0c14-42de-bb89-58f896719766" in namespace "emptydir-6910" to be "success or failure"
Feb 11 00:32:39.800: INFO: Pod "pod-85c4e4d0-0c14-42de-bb89-58f896719766": Phase="Pending", Reason="", readiness=false. Elapsed: 79.142223ms
Feb 11 00:32:41.809: INFO: Pod "pod-85c4e4d0-0c14-42de-bb89-58f896719766": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088836269s
Feb 11 00:32:43.826: INFO: Pod "pod-85c4e4d0-0c14-42de-bb89-58f896719766": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105529639s
Feb 11 00:32:45.833: INFO: Pod "pod-85c4e4d0-0c14-42de-bb89-58f896719766": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112858706s
Feb 11 00:32:47.840: INFO: Pod "pod-85c4e4d0-0c14-42de-bb89-58f896719766": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119240955s
Feb 11 00:32:49.847: INFO: Pod "pod-85c4e4d0-0c14-42de-bb89-58f896719766": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.126189565s
STEP: Saw pod success
Feb 11 00:32:49.847: INFO: Pod "pod-85c4e4d0-0c14-42de-bb89-58f896719766" satisfied condition "success or failure"
Feb 11 00:32:49.852: INFO: Trying to get logs from node jerma-node pod pod-85c4e4d0-0c14-42de-bb89-58f896719766 container test-container: 
STEP: delete the pod
Feb 11 00:32:49.946: INFO: Waiting for pod pod-85c4e4d0-0c14-42de-bb89-58f896719766 to disappear
Feb 11 00:32:49.958: INFO: Pod pod-85c4e4d0-0c14-42de-bb89-58f896719766 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:32:49.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6910" for this suite.

• [SLOW TEST:10.456 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":122,"skipped":1839,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:32:49.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 11 00:32:50.085: INFO: Waiting up to 5m0s for pod "pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816" in namespace "emptydir-5931" to be "success or failure"
Feb 11 00:32:50.096: INFO: Pod "pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816": Phase="Pending", Reason="", readiness=false. Elapsed: 11.558673ms
Feb 11 00:32:53.730: INFO: Pod "pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816": Phase="Pending", Reason="", readiness=false. Elapsed: 3.645317543s
Feb 11 00:32:55.738: INFO: Pod "pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816": Phase="Pending", Reason="", readiness=false. Elapsed: 5.653410538s
Feb 11 00:32:57.747: INFO: Pod "pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816": Phase="Pending", Reason="", readiness=false. Elapsed: 7.66219206s
Feb 11 00:32:59.754: INFO: Pod "pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816": Phase="Pending", Reason="", readiness=false. Elapsed: 9.668664207s
Feb 11 00:33:01.760: INFO: Pod "pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.67507163s
STEP: Saw pod success
Feb 11 00:33:01.760: INFO: Pod "pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816" satisfied condition "success or failure"
Feb 11 00:33:01.765: INFO: Trying to get logs from node jerma-node pod pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816 container test-container: 
STEP: delete the pod
Feb 11 00:33:01.808: INFO: Waiting for pod pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816 to disappear
Feb 11 00:33:01.830: INFO: Pod pod-5bc46b5c-ae32-4876-8aa6-fac370fcc816 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:33:01.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5931" for this suite.

• [SLOW TEST:11.873 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":123,"skipped":1873,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:33:01.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 11 00:33:01.982: INFO: Waiting up to 5m0s for pod "pod-5e8ac9a4-e67c-4dcb-a3c7-6d784fb2d75f" in namespace "emptydir-2120" to be "success or failure"
Feb 11 00:33:02.020: INFO: Pod "pod-5e8ac9a4-e67c-4dcb-a3c7-6d784fb2d75f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.188857ms
Feb 11 00:33:04.027: INFO: Pod "pod-5e8ac9a4-e67c-4dcb-a3c7-6d784fb2d75f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045100571s
Feb 11 00:33:06.033: INFO: Pod "pod-5e8ac9a4-e67c-4dcb-a3c7-6d784fb2d75f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050486329s
Feb 11 00:33:08.043: INFO: Pod "pod-5e8ac9a4-e67c-4dcb-a3c7-6d784fb2d75f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060875362s
STEP: Saw pod success
Feb 11 00:33:08.044: INFO: Pod "pod-5e8ac9a4-e67c-4dcb-a3c7-6d784fb2d75f" satisfied condition "success or failure"
Feb 11 00:33:08.051: INFO: Trying to get logs from node jerma-node pod pod-5e8ac9a4-e67c-4dcb-a3c7-6d784fb2d75f container test-container: 
STEP: delete the pod
Feb 11 00:33:08.088: INFO: Waiting for pod pod-5e8ac9a4-e67c-4dcb-a3c7-6d784fb2d75f to disappear
Feb 11 00:33:08.091: INFO: Pod pod-5e8ac9a4-e67c-4dcb-a3c7-6d784fb2d75f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:33:08.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2120" for this suite.

• [SLOW TEST:6.247 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":124,"skipped":1878,"failed":0}
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:33:08.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 11 00:33:08.930: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 11 00:33:10.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977988, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977988, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977989, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977988, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:33:12.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977988, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977988, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977989, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977988, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:33:14.961: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977988, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977988, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977989, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716977988, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 11 00:33:17.996: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 00:33:18.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:33:19.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7349" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.311 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":125,"skipped":1878,"failed":0}
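
The setup steps above (serving cert, webhook deployment, service, endpoint pairing) all exist to back the conversion stanza of the CRD. A sketch of that stanza with the apiextensions v1 Go types; the path, port, and CA bundle values are illustrative assumptions, only the service name and namespace come from the log:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	path := "/crdconvert" // illustrative webhook path
	port := int32(9443)   // illustrative service port
	conv := apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook-7349",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
					Port:      &port,
				},
				// CA that signed the serving cert set up in the STEPs above.
				CABundle: []byte("<PEM-encoded CA bundle>"),
			},
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
	fmt.Println(conv.Strategy)
}

With strategy Webhook, the apiserver calls this service whenever a stored v1 object is requested as v2, which is what the "v2 custom resource should be converted" step asserts.
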
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:33:19.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 00:33:19.557: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:33:20.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7120" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":280,"completed":126,"skipped":1887,"failed":0}
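
Defaulting "for requests and from storage" means a default declared in the CRD's OpenAPI v3 schema is applied both when an object is created and when an older stored object that lacks the field is read back. A sketch of such a schema, assuming the apiextensions v1 types; the field names and default value are illustrative:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	schema := apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				Properties: map[string]apiextensionsv1.JSONSchemaProps{
					"replicas": {
						Type: "integer",
						// Applied on create/update requests and when reading
						// stored objects that omit the field.
						Default: &apiextensionsv1.JSON{Raw: []byte(`5`)},
					},
				},
			},
		},
	}
	fmt.Printf("%+v\n", schema.Properties["spec"].Properties["replicas"].Default)
}
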
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:33:20.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-2ff1c9ae-7691-4620-84df-26aa905f9a6c
STEP: Creating a pod to test consume configMaps
Feb 11 00:33:21.241: INFO: Waiting up to 5m0s for pod "pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129" in namespace "configmap-1373" to be "success or failure"
Feb 11 00:33:21.383: INFO: Pod "pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129": Phase="Pending", Reason="", readiness=false. Elapsed: 141.25476ms
Feb 11 00:33:23.391: INFO: Pod "pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149192723s
Feb 11 00:33:25.401: INFO: Pod "pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159222144s
Feb 11 00:33:27.408: INFO: Pod "pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166388581s
Feb 11 00:33:29.438: INFO: Pod "pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196689575s
Feb 11 00:33:31.466: INFO: Pod "pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224175763s
STEP: Saw pod success
Feb 11 00:33:31.466: INFO: Pod "pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129" satisfied condition "success or failure"
Feb 11 00:33:31.469: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129 container configmap-volume-test: 
STEP: delete the pod
Feb 11 00:33:31.513: INFO: Waiting for pod pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129 to disappear
Feb 11 00:33:31.532: INFO: Pod pod-configmaps-4d7a4bde-5562-4b03-8f8d-3784ee07c129 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:33:31.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1373" for this suite.

• [SLOW TEST:10.659 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":127,"skipped":1899,"failed":0}
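
"With mappings" refers to the items list of the ConfigMap volume source, which remaps a key to an arbitrary relative path instead of mounting every key at its own name. A sketch of the volume, assuming the k8s.io/api types; the ConfigMap name is the one from the log, the key and path are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-volume-map-2ff1c9ae-7691-4620-84df-26aa905f9a6c",
				},
				// Mapping: key "data-1" appears at <mountPath>/path/to/data-1.
				Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
			},
		},
	}
	fmt.Printf("%+v\n", vol.VolumeSource.ConfigMap.Items)
}
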
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:33:31.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 11 00:33:31.750: INFO: Waiting up to 5m0s for pod "pod-141b46da-0573-48d1-a784-4f1c1cc176ce" in namespace "emptydir-4706" to be "success or failure"
Feb 11 00:33:31.839: INFO: Pod "pod-141b46da-0573-48d1-a784-4f1c1cc176ce": Phase="Pending", Reason="", readiness=false. Elapsed: 89.168523ms
Feb 11 00:33:33.855: INFO: Pod "pod-141b46da-0573-48d1-a784-4f1c1cc176ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104588056s
Feb 11 00:33:35.865: INFO: Pod "pod-141b46da-0573-48d1-a784-4f1c1cc176ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11484765s
Feb 11 00:33:37.879: INFO: Pod "pod-141b46da-0573-48d1-a784-4f1c1cc176ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128943278s
Feb 11 00:33:39.886: INFO: Pod "pod-141b46da-0573-48d1-a784-4f1c1cc176ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.135894785s
STEP: Saw pod success
Feb 11 00:33:39.886: INFO: Pod "pod-141b46da-0573-48d1-a784-4f1c1cc176ce" satisfied condition "success or failure"
Feb 11 00:33:39.898: INFO: Trying to get logs from node jerma-node pod pod-141b46da-0573-48d1-a784-4f1c1cc176ce container test-container: 
STEP: delete the pod
Feb 11 00:33:40.333: INFO: Waiting for pod pod-141b46da-0573-48d1-a784-4f1c1cc176ce to disappear
Feb 11 00:33:40.358: INFO: Pod pod-141b46da-0573-48d1-a784-4f1c1cc176ce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:33:40.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4706" for this suite.

• [SLOW TEST:8.822 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":128,"skipped":1899,"failed":0}
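
Compared with the tmpfs variant earlier, (non-root,0777,default) leaves the emptyDir medium unset (node-local disk) and runs the pod under a non-root UID. The delta as a sketch; 1001 is an illustrative UID, not taken from the run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1001) // illustrative non-root UID
	spec := corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Empty medium selects the default (node disk) backing.
				EmptyDir: &corev1.EmptyDirVolumeSource{},
			},
		}},
	}
	fmt.Printf("%+v\n", *spec.SecurityContext.RunAsUser)
}
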
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:33:40.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:33:48.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7388" for this suite.

• [SLOW TEST:8.214 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance]","total":280,"completed":129,"skipped":1930,"failed":0}
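
The spec schedules a command that always fails and then asserts that the kubelet populates a terminated state with a non-empty reason on the container status. A sketch of the shape being checked, assuming the k8s.io/api types; busybox and /bin/false are illustrative stand-ins for the test's failing command:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A container whose command always fails:
	c := corev1.Container{Name: "bin-false", Image: "busybox", Command: []string{"/bin/false"}}

	// After the container exits, the kubelet reports a status of roughly this shape:
	state := corev1.ContainerState{
		Terminated: &corev1.ContainerStateTerminated{
			ExitCode: 1,
			Reason:   "Error", // the non-empty reason the test asserts on
		},
	}
	fmt.Println(c.Name, state.Terminated.Reason, state.Terminated.ExitCode)
}
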
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:33:48.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-secret-jk7h
STEP: Creating a pod to test atomic-volume-subpath
Feb 11 00:33:48.800: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jk7h" in namespace "subpath-8108" to be "success or failure"
Feb 11 00:33:48.816: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Pending", Reason="", readiness=false. Elapsed: 15.357517ms
Feb 11 00:33:50.824: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02329879s
Feb 11 00:33:52.829: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028306991s
Feb 11 00:33:54.835: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034386478s
Feb 11 00:33:56.841: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040744924s
Feb 11 00:33:58.851: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Running", Reason="", readiness=true. Elapsed: 10.05112856s
Feb 11 00:34:00.859: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Running", Reason="", readiness=true. Elapsed: 12.058885777s
Feb 11 00:34:02.871: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Running", Reason="", readiness=true. Elapsed: 14.071055708s
Feb 11 00:34:04.879: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Running", Reason="", readiness=true. Elapsed: 16.078512231s
Feb 11 00:34:06.891: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Running", Reason="", readiness=true. Elapsed: 18.09085507s
Feb 11 00:34:08.898: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Running", Reason="", readiness=true. Elapsed: 20.098017262s
Feb 11 00:34:10.907: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Running", Reason="", readiness=true. Elapsed: 22.106379031s
Feb 11 00:34:12.915: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Running", Reason="", readiness=true. Elapsed: 24.114726447s
Feb 11 00:34:14.924: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Running", Reason="", readiness=true. Elapsed: 26.123428527s
Feb 11 00:34:16.930: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Running", Reason="", readiness=true. Elapsed: 28.129823867s
Feb 11 00:34:18.937: INFO: Pod "pod-subpath-test-secret-jk7h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.136648853s
STEP: Saw pod success
Feb 11 00:34:18.937: INFO: Pod "pod-subpath-test-secret-jk7h" satisfied condition "success or failure"
Feb 11 00:34:18.941: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-jk7h container test-container-subpath-secret-jk7h: 
STEP: delete the pod
Feb 11 00:34:19.097: INFO: Waiting for pod pod-subpath-test-secret-jk7h to disappear
Feb 11 00:34:19.107: INFO: Pod pod-subpath-test-secret-jk7h no longer exists
STEP: Deleting pod pod-subpath-test-secret-jk7h
Feb 11 00:34:19.107: INFO: Deleting pod "pod-subpath-test-secret-jk7h" in namespace "subpath-8108"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:34:19.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8108" for this suite.

• [SLOW TEST:30.539 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":130,"skipped":1944,"failed":0}
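
Atomic-writer volumes (secret, configMap, downwardAPI, projected) publish updates by swapping a symlink; mounting them with subPath binds one resolved file into the container, which is what the pod above reads during the ~30 seconds it stays Running. A sketch of the mount, assuming the k8s.io/api types; the secret and key names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "test-container-subpath",
			Image: "busybox",
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "test-volume",
				MountPath: "/test-volume",
				// Bind only the file behind key "secret-key", not the whole volume.
				SubPath: "secret-key",
			}},
		}},
	}
	fmt.Printf("%+v\n", spec.Containers[0].VolumeMounts[0])
}
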
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:34:19.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:34:24.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7758" for this suite.

• [SLOW TEST:5.098 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":131,"skipped":1975,"failed":0}
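
Each watch starts at a different resource version, so a watch started later must observe a suffix of the event stream seen by an earlier one, in identical order. A condensed sketch of that ordering check over recorded resource versions; the values are illustrative:

package main

import "fmt"

// isSuffix reports whether tail equals the last len(tail) elements of full.
func isSuffix(full, tail []string) bool {
	if len(tail) > len(full) {
		return false
	}
	off := len(full) - len(tail)
	for i := range tail {
		if full[off+i] != tail[i] {
			return false
		}
	}
	return true
}

func main() {
	watcherA := []string{"100", "101", "102", "103"} // started earliest
	watcherB := []string{"102", "103"}               // started at a later resource version
	fmt.Println(isSuffix(watcherA, watcherB))        // true: same order, later start
}
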
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:34:24.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4022
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4022
I0211 00:34:25.911853       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4022, replica count: 2
I0211 00:34:28.963105       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:34:31.963696       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:34:34.964828       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 11 00:34:34.965: INFO: Creating new exec pod
Feb 11 00:34:42.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4022 execpodg6lwq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 11 00:34:44.981: INFO: stderr: "I0211 00:34:44.736969    1439 log.go:172] (0xc0006353f0) (0xc0006d7f40) Create stream\nI0211 00:34:44.737193    1439 log.go:172] (0xc0006353f0) (0xc0006d7f40) Stream added, broadcasting: 1\nI0211 00:34:44.743619    1439 log.go:172] (0xc0006353f0) Reply frame received for 1\nI0211 00:34:44.743787    1439 log.go:172] (0xc0006353f0) (0xc0005fe820) Create stream\nI0211 00:34:44.743831    1439 log.go:172] (0xc0006353f0) (0xc0005fe820) Stream added, broadcasting: 3\nI0211 00:34:44.745458    1439 log.go:172] (0xc0006353f0) Reply frame received for 3\nI0211 00:34:44.745563    1439 log.go:172] (0xc0006353f0) (0xc00073d4a0) Create stream\nI0211 00:34:44.745591    1439 log.go:172] (0xc0006353f0) (0xc00073d4a0) Stream added, broadcasting: 5\nI0211 00:34:44.747253    1439 log.go:172] (0xc0006353f0) Reply frame received for 5\nI0211 00:34:44.838623    1439 log.go:172] (0xc0006353f0) Data frame received for 5\nI0211 00:34:44.838701    1439 log.go:172] (0xc00073d4a0) (5) Data frame handling\nI0211 00:34:44.838721    1439 log.go:172] (0xc00073d4a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0211 00:34:44.842663    1439 log.go:172] (0xc0006353f0) Data frame received for 5\nI0211 00:34:44.842694    1439 log.go:172] (0xc00073d4a0) (5) Data frame handling\nI0211 00:34:44.842711    1439 log.go:172] (0xc00073d4a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0211 00:34:44.959170    1439 log.go:172] (0xc0006353f0) Data frame received for 1\nI0211 00:34:44.959232    1439 log.go:172] (0xc0006353f0) (0xc00073d4a0) Stream removed, broadcasting: 5\nI0211 00:34:44.959293    1439 log.go:172] (0xc0006d7f40) (1) Data frame handling\nI0211 00:34:44.959311    1439 log.go:172] (0xc0006d7f40) (1) Data frame sent\nI0211 00:34:44.959353    1439 log.go:172] (0xc0006353f0) (0xc0005fe820) Stream removed, broadcasting: 3\nI0211 00:34:44.959387    1439 log.go:172] (0xc0006353f0) (0xc0006d7f40) Stream removed, broadcasting: 1\nI0211 00:34:44.959406    1439 log.go:172] (0xc0006353f0) Go away received\nI0211 00:34:44.960567    1439 log.go:172] (0xc0006353f0) (0xc0006d7f40) Stream removed, broadcasting: 1\nI0211 00:34:44.960581    1439 log.go:172] (0xc0006353f0) (0xc0005fe820) Stream removed, broadcasting: 3\nI0211 00:34:44.960599    1439 log.go:172] (0xc0006353f0) (0xc00073d4a0) Stream removed, broadcasting: 5\n"
Feb 11 00:34:44.982: INFO: stdout: ""
Feb 11 00:34:44.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4022 execpodg6lwq -- /bin/sh -x -c nc -zv -t -w 2 10.96.170.7 80'
Feb 11 00:34:45.485: INFO: stderr: "I0211 00:34:45.227930    1474 log.go:172] (0xc000a2d550) (0xc00093a3c0) Create stream\nI0211 00:34:45.228044    1474 log.go:172] (0xc000a2d550) (0xc00093a3c0) Stream added, broadcasting: 1\nI0211 00:34:45.243176    1474 log.go:172] (0xc000a2d550) Reply frame received for 1\nI0211 00:34:45.243251    1474 log.go:172] (0xc000a2d550) (0xc000588780) Create stream\nI0211 00:34:45.243264    1474 log.go:172] (0xc000a2d550) (0xc000588780) Stream added, broadcasting: 3\nI0211 00:34:45.244615    1474 log.go:172] (0xc000a2d550) Reply frame received for 3\nI0211 00:34:45.244692    1474 log.go:172] (0xc000a2d550) (0xc000717400) Create stream\nI0211 00:34:45.244709    1474 log.go:172] (0xc000a2d550) (0xc000717400) Stream added, broadcasting: 5\nI0211 00:34:45.247074    1474 log.go:172] (0xc000a2d550) Reply frame received for 5\nI0211 00:34:45.333303    1474 log.go:172] (0xc000a2d550) Data frame received for 5\nI0211 00:34:45.333476    1474 log.go:172] (0xc000717400) (5) Data frame handling\nI0211 00:34:45.333523    1474 log.go:172] (0xc000717400) (5) Data frame sent\nI0211 00:34:45.333532    1474 log.go:172] (0xc000a2d550) Data frame received for 5\nI0211 00:34:45.333538    1474 log.go:172] (0xc000717400) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.170.7 80\nConnection to 10.96.170.7 80 port [tcp/http] succeeded!\nI0211 00:34:45.333643    1474 log.go:172] (0xc000717400) (5) Data frame sent\nI0211 00:34:45.473881    1474 log.go:172] (0xc000a2d550) Data frame received for 1\nI0211 00:34:45.474080    1474 log.go:172] (0xc000a2d550) (0xc000588780) Stream removed, broadcasting: 3\nI0211 00:34:45.474334    1474 log.go:172] (0xc00093a3c0) (1) Data frame handling\nI0211 00:34:45.474404    1474 log.go:172] (0xc00093a3c0) (1) Data frame sent\nI0211 00:34:45.474644    1474 log.go:172] (0xc000a2d550) (0xc00093a3c0) Stream removed, broadcasting: 1\nI0211 00:34:45.474801    1474 log.go:172] (0xc000a2d550) (0xc000717400) Stream removed, broadcasting: 5\nI0211 00:34:45.474878    1474 log.go:172] (0xc000a2d550) Go away received\nI0211 00:34:45.475931    1474 log.go:172] (0xc000a2d550) (0xc00093a3c0) Stream removed, broadcasting: 1\nI0211 00:34:45.475963    1474 log.go:172] (0xc000a2d550) (0xc000588780) Stream removed, broadcasting: 3\nI0211 00:34:45.475999    1474 log.go:172] (0xc000a2d550) (0xc000717400) Stream removed, broadcasting: 5\n"
Feb 11 00:34:45.485: INFO: stdout: ""
Feb 11 00:34:45.485: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:34:45.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4022" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:21.369 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":132,"skipped":1995,"failed":0}
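
The type flip above is a single service update: the ExternalName (a DNS CNAME) is dropped, and a selector plus ports are added so the service gets a ClusterIP backed by the replication controller's pods. A sketch, assuming the k8s.io/api types; the CNAME target and selector labels are illustrative, the names and checked port come from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service", Namespace: "services-4022"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com", // illustrative CNAME target
		},
	}

	// Flip to ClusterIP: clear the CNAME, add selector and ports.
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"name": "externalname-service"} // illustrative
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}}

	fmt.Println(svc.Spec.Type, svc.Spec.Ports[0].Port)
}

The two nc probes then confirm port 80 answers both by service name and by the assigned ClusterIP (10.96.170.7).
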
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:34:45.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 11 00:34:45.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b" in namespace "projected-8114" to be "success or failure"
Feb 11 00:34:45.765: INFO: Pod "downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.928772ms
Feb 11 00:34:47.772: INFO: Pod "downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029080184s
Feb 11 00:34:49.781: INFO: Pod "downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037495375s
Feb 11 00:34:51.794: INFO: Pod "downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050471002s
Feb 11 00:34:54.396: INFO: Pod "downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.652143888s
Feb 11 00:34:56.491: INFO: Pod "downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.747845165s
STEP: Saw pod success
Feb 11 00:34:56.491: INFO: Pod "downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b" satisfied condition "success or failure"
Feb 11 00:34:56.500: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b container client-container: 
STEP: delete the pod
Feb 11 00:34:56.978: INFO: Waiting for pod downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b to disappear
Feb 11 00:34:56.982: INFO: Pod downwardapi-volume-50f7e2b7-f5ec-45a5-ab54-163ea2a4e81b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:34:56.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8114" for this suite.

• [SLOW TEST:11.395 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":133,"skipped":1995,"failed":0}
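
The projected downward API file uses a resourceFieldRef on limits.cpu; because the container sets no CPU limit, the kubelet falls back to node allocatable CPU, and that is the value the test reads back. A sketch of the volume, assuming the k8s.io/api types; the container name matches the log, the volume name and path are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								// With no limit set, this resolves to node allocatable CPU.
								Resource: "limits.cpu",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol.VolumeSource.Projected.Sources[0].DownwardAPI.Items[0])
}
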
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:34:56.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-7fe49eab-3018-4ed0-acf5-258428b203b2
STEP: Creating a pod to test consume configMaps
Feb 11 00:34:57.165: INFO: Waiting up to 5m0s for pod "pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf" in namespace "configmap-4643" to be "success or failure"
Feb 11 00:34:57.174: INFO: Pod "pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.88582ms
Feb 11 00:34:59.183: INFO: Pod "pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018072824s
Feb 11 00:35:01.265: INFO: Pod "pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099452159s
Feb 11 00:35:03.273: INFO: Pod "pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108304181s
Feb 11 00:35:05.284: INFO: Pod "pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118729644s
Feb 11 00:35:07.296: INFO: Pod "pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.130579914s
STEP: Saw pod success
Feb 11 00:35:07.296: INFO: Pod "pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf" satisfied condition "success or failure"
Feb 11 00:35:07.301: INFO: Trying to get logs from node jerma-node pod pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf container configmap-volume-test: 
STEP: delete the pod
Feb 11 00:35:07.365: INFO: Waiting for pod pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf to disappear
Feb 11 00:35:07.377: INFO: Pod pod-configmaps-0381e3b6-e6a9-4157-880a-ca95b26ef1cf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:35:07.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4643" for this suite.

• [SLOW TEST:10.434 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":134,"skipped":1996,"failed":0}
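
The non-root variant differs from the earlier mappings test only in the pod's security context; the mounted keys keep their default 0644 mode, so a non-root reader still succeeds. The delta as a sketch; 1001 is an illustrative UID, the ConfigMap name is the one from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1001)  // illustrative non-root UID
	mode := int32(0644) // the default file mode, readable by any UID
	src := corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{
			Name: "configmap-test-volume-map-7fe49eab-3018-4ed0-acf5-258428b203b2",
		},
		DefaultMode: &mode,
	}
	sc := corev1.PodSecurityContext{RunAsUser: &uid}
	fmt.Println(src.Name, *sc.RunAsUser)
}
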
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:35:07.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:35:15.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2182" for this suite.

• [SLOW TEST:8.211 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":135,"skipped":2087,"failed":0}
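
This spec runs a short busybox command and asserts that its stdout lands in the container log, retrievable via the logs subresource. A sketch of the pod, assuming the k8s.io/api types; the echoed string is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs", Namespace: "kubelet-test-2182"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo 'Hello World'"},
			}},
		},
	}
	// The test then reads the container log and expects the echoed line, e.g.:
	//   kubectl logs busybox-logs -n kubelet-test-2182   ->   Hello World
	fmt.Println(pod.Spec.Containers[0].Command)
}
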
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:35:15.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5848
STEP: Creating an active service to test reachability when its FQDN is referenced as the externalName of another service
STEP: creating service externalsvc in namespace services-5848
STEP: creating replication controller externalsvc in namespace services-5848
I0211 00:35:15.908743       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5848, replica count: 2
I0211 00:35:18.959861       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:35:21.960695       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:35:24.962032       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:35:27.963072       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Feb 11 00:35:28.006: INFO: Creating new exec pod
Feb 11 00:35:34.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5848 execpodrpk2l -- /bin/sh -x -c nslookup nodeport-service'
Feb 11 00:35:34.620: INFO: stderr: "I0211 00:35:34.219328    1495 log.go:172] (0xc000a05760) (0xc0009f46e0) Create stream\nI0211 00:35:34.219518    1495 log.go:172] (0xc000a05760) (0xc0009f46e0) Stream added, broadcasting: 1\nI0211 00:35:34.222205    1495 log.go:172] (0xc000a05760) Reply frame received for 1\nI0211 00:35:34.222238    1495 log.go:172] (0xc000a05760) (0xc0008a0000) Create stream\nI0211 00:35:34.222243    1495 log.go:172] (0xc000a05760) (0xc0008a0000) Stream added, broadcasting: 3\nI0211 00:35:34.223556    1495 log.go:172] (0xc000a05760) Reply frame received for 3\nI0211 00:35:34.223578    1495 log.go:172] (0xc000a05760) (0xc000890000) Create stream\nI0211 00:35:34.223589    1495 log.go:172] (0xc000a05760) (0xc000890000) Stream added, broadcasting: 5\nI0211 00:35:34.224567    1495 log.go:172] (0xc000a05760) Reply frame received for 5\nI0211 00:35:34.306063    1495 log.go:172] (0xc000a05760) Data frame received for 5\nI0211 00:35:34.306234    1495 log.go:172] (0xc000890000) (5) Data frame handling\nI0211 00:35:34.306269    1495 log.go:172] (0xc000890000) (5) Data frame sent\n+ nslookup nodeport-service\nI0211 00:35:34.477220    1495 log.go:172] (0xc000a05760) Data frame received for 3\nI0211 00:35:34.477353    1495 log.go:172] (0xc0008a0000) (3) Data frame handling\nI0211 00:35:34.477393    1495 log.go:172] (0xc0008a0000) (3) Data frame sent\nI0211 00:35:34.481731    1495 log.go:172] (0xc000a05760) Data frame received for 3\nI0211 00:35:34.481773    1495 log.go:172] (0xc0008a0000) (3) Data frame handling\nI0211 00:35:34.481793    1495 log.go:172] (0xc0008a0000) (3) Data frame sent\nI0211 00:35:34.607467    1495 log.go:172] (0xc000a05760) Data frame received for 1\nI0211 00:35:34.607522    1495 log.go:172] (0xc0009f46e0) (1) Data frame handling\nI0211 00:35:34.607565    1495 log.go:172] (0xc0009f46e0) (1) Data frame sent\nI0211 00:35:34.607607    1495 log.go:172] (0xc000a05760) (0xc0009f46e0) Stream removed, broadcasting: 1\nI0211 00:35:34.608739    1495 log.go:172] (0xc000a05760) (0xc0008a0000) Stream removed, broadcasting: 3\nI0211 00:35:34.609077    1495 log.go:172] (0xc000a05760) (0xc000890000) Stream removed, broadcasting: 5\nI0211 00:35:34.609111    1495 log.go:172] (0xc000a05760) (0xc0009f46e0) Stream removed, broadcasting: 1\nI0211 00:35:34.609118    1495 log.go:172] (0xc000a05760) (0xc0008a0000) Stream removed, broadcasting: 3\nI0211 00:35:34.609122    1495 log.go:172] (0xc000a05760) (0xc000890000) Stream removed, broadcasting: 5\nI0211 00:35:34.609591    1495 log.go:172] (0xc000a05760) Go away received\n"
Feb 11 00:35:34.621: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5848.svc.cluster.local\tcanonical name = externalsvc.services-5848.svc.cluster.local.\nName:\texternalsvc.services-5848.svc.cluster.local\nAddress: 10.96.0.42\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5848, will wait for the garbage collector to delete the pods
Feb 11 00:35:34.691: INFO: Deleting ReplicationController externalsvc took: 14.27936ms
Feb 11 00:35:35.092: INFO: Terminating ReplicationController externalsvc pods took: 400.492276ms
Feb 11 00:35:53.318: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:35:53.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5848" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:37.793 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":136,"skipped":2096,"failed":0}
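
This is the reverse flip: a NodePort service is rewritten as an ExternalName pointing at the FQDN of the active service, so lookups of nodeport-service return a CNAME to externalsvc, exactly what the nslookup output above shows. A sketch, assuming the k8s.io/api types; the selector labels are illustrative, the names and CNAME target come from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-service", Namespace: "services-5848"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "externalsvc"}, // illustrative
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}

	// Flip to ExternalName: drop selector/ports/clusterIP, set the CNAME target.
	svc.Spec = corev1.ServiceSpec{
		Type:         corev1.ServiceTypeExternalName,
		ExternalName: "externalsvc.services-5848.svc.cluster.local",
	}
	fmt.Println(svc.Spec.Type, svc.Spec.ExternalName)
}
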
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:35:53.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-9148
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9148 to expose endpoints map[]
Feb 11 00:35:53.799: INFO: Get endpoints failed (50.870438ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 11 00:35:54.806: INFO: successfully validated that service endpoint-test2 in namespace services-9148 exposes endpoints map[] (1.057373325s elapsed)
STEP: Creating pod pod1 in namespace services-9148
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9148 to expose endpoints map[pod1:[80]]
Feb 11 00:35:59.044: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.210695891s elapsed, will retry)
Feb 11 00:36:03.203: INFO: successfully validated that service endpoint-test2 in namespace services-9148 exposes endpoints map[pod1:[80]] (8.37062108s elapsed)
STEP: Creating pod pod2 in namespace services-9148
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9148 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 11 00:36:07.650: INFO: Unexpected endpoints: found map[f67717f6-30f3-4eae-adcd-df83a2b87947:[80]], expected map[pod1:[80] pod2:[80]] (4.437020281s elapsed, will retry)
Feb 11 00:36:11.119: INFO: successfully validated that service endpoint-test2 in namespace services-9148 exposes endpoints map[pod1:[80] pod2:[80]] (7.905751562s elapsed)
STEP: Deleting pod pod1 in namespace services-9148
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9148 to expose endpoints map[pod2:[80]]
Feb 11 00:36:12.198: INFO: successfully validated that service endpoint-test2 in namespace services-9148 exposes endpoints map[pod2:[80]] (1.068282366s elapsed)
STEP: Deleting pod pod2 in namespace services-9148
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9148 to expose endpoints map[]
Feb 11 00:36:13.739: INFO: successfully validated that service endpoint-test2 in namespace services-9148 exposes endpoints map[] (1.530454675s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:36:14.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9148" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:20.916 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":280,"completed":137,"skipped":2149,"failed":0}
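
The endpoints map[pod1:[80] pod2:[80]] tracks which selected, running pods back the service at which ports; the endpoints controller adds and removes entries as pods come and go, which is the whole assertion here. A sketch of a matching service and pod, assuming the k8s.io/api types; the selector label and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2", Namespace: "services-9148"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "endpoint-test2"}, // illustrative
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}
	pod1 := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod1",
			Labels: svc.Spec.Selector, // matching labels put pod1 into the endpoints
		},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "backend",
			Image: "nginx", // illustrative backend serving port 80
			Ports: []corev1.ContainerPort{{ContainerPort: 80}},
		}}},
	}
	fmt.Println(svc.Name, pod1.Name)
}
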
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:36:14.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5859.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5859.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5859.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5859.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5859.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5859.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
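
These names resolve because of the headless service plus the pod's hostname and subdomain fields: a pod with hostname H and subdomain S in namespace N gets the DNS name H.S.N.svc.cluster.local once a headless service named S exists. A sketch, assuming the k8s.io/api types; the selector and port are illustrative, the names come from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	headless := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2", Namespace: "dns-5859"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,                   // headless
			Selector:  map[string]string{"dns-test": "true"},  // illustrative
			Ports:     []corev1.ServicePort{{Port: 80}},       // illustrative
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-querier-2", Namespace: "dns-5859"},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: headless.Name,
			// => dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local
			Containers: []corev1.Container{{Name: "querier", Image: "busybox"}},
		},
	}
	fmt.Println(pod.Spec.Hostname + "." + pod.Spec.Subdomain + "." + pod.Namespace + ".svc.cluster.local")
}

The failed lookup rounds that follow are expected while the cluster DNS programs the new records; the probe loop keeps digging until every name answers.
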

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 00:36:28.575: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:28.581: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:28.587: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:28.593: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:28.611: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:28.616: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:28.627: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:28.634: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:28.642: INFO: Lookups using dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local]

Feb 11 00:36:33.679: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:33.696: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:33.779: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:33.805: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:33.857: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:33.878: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:33.927: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:33.933: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:33.944: INFO: Lookups using dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local]

Feb 11 00:36:38.651: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:38.654: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:38.659: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:38.664: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:38.674: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:38.678: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:38.681: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:38.684: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:38.689: INFO: Lookups using dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local]

Feb 11 00:36:43.659: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:43.672: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:43.685: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:43.701: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:43.722: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:43.725: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:43.728: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:43.732: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:43.740: INFO: Lookups using dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local]

Feb 11 00:36:48.665: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:48.676: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:48.684: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:48.690: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:48.700: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:48.704: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:48.707: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:48.710: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:48.720: INFO: Lookups using dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5859.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local]

Feb 11 00:36:53.670: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:53.684: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:53.689: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:53.695: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:53.699: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local from pod dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b: the server could not find the requested resource (get pods dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b)
Feb 11 00:36:53.709: INFO: Lookups using dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b failed for: [wheezy_tcp@dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5859.svc.cluster.local jessie_udp@dns-test-service-2.dns-5859.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5859.svc.cluster.local]

Feb 11 00:36:58.746: INFO: DNS probes using dns-5859/dns-test-f781a02c-5d49-4b30-8331-e34e1e8a9c7b succeeded
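
The failure blocks above are successive iterations of a poll loop: every ~5 seconds the test re-reads the probe results for each record, logs the names that still fail, and stops once all of them resolve (00:36:58 in this run). Below is a loose, hypothetical Go analogue of that retry pattern; it is not the suite's actual code (the real test reads marker files written by dig inside the probe pod rather than resolving from the test binary) and assumes the k8s.io/apimachinery wait helpers.

package sketch

import (
	"context"
	"fmt"
	"net"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForRecords re-runs every lookup on each tick and only stops polling
// once all names resolve, mirroring the repeated "Lookups ... failed for:
// [...]" lines that end with "DNS probes ... succeeded".
func waitForRecords(names []string) error {
	return wait.PollImmediate(5*time.Second, 3*time.Minute, func() (bool, error) {
		var failed []string
		for _, name := range names {
			if _, err := net.DefaultResolver.LookupHost(context.TODO(), name); err != nil {
				failed = append(failed, name)
			}
		}
		if len(failed) > 0 {
			fmt.Printf("Lookups failed for: %v\n", failed)
			return false, nil // not done; poll again after the interval
		}
		return true, nil
	})
}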

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:36:58.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5859" for this suite.

• [SLOW TEST:46.999 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":138,"skipped":2153,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:37:01.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-f1024bb0-3c5a-4d80-8133-f7814be77a63
STEP: Creating a pod to test consume secrets
Feb 11 00:37:01.732: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325" in namespace "projected-7659" to be "success or failure"
Feb 11 00:37:01.747: INFO: Pod "pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325": Phase="Pending", Reason="", readiness=false. Elapsed: 14.440324ms
Feb 11 00:37:03.755: INFO: Pod "pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022426605s
Feb 11 00:37:05.765: INFO: Pod "pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032507666s
Feb 11 00:37:07.898: INFO: Pod "pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166261302s
Feb 11 00:37:09.933: INFO: Pod "pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200631937s
Feb 11 00:37:11.962: INFO: Pod "pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.230234173s
STEP: Saw pod success
Feb 11 00:37:11.962: INFO: Pod "pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325" satisfied condition "success or failure"
Feb 11 00:37:11.966: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325 container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 00:37:12.046: INFO: Waiting for pod pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325 to disappear
Feb 11 00:37:12.060: INFO: Pod pod-projected-secrets-1b1fd3e9-880a-40f6-a666-cc2894272325 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:37:12.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7659" for this suite.

• [SLOW TEST:10.713 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":139,"skipped":2166,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:37:12.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 11 00:37:12.244: INFO: Number of nodes with available pods: 0
Feb 11 00:37:12.245: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:37:13.911: INFO: Number of nodes with available pods: 0
Feb 11 00:37:13.911: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:37:14.263: INFO: Number of nodes with available pods: 0
Feb 11 00:37:14.263: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:37:15.257: INFO: Number of nodes with available pods: 0
Feb 11 00:37:15.257: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:37:16.273: INFO: Number of nodes with available pods: 0
Feb 11 00:37:16.273: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:37:18.140: INFO: Number of nodes with available pods: 0
Feb 11 00:37:18.140: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:37:19.392: INFO: Number of nodes with available pods: 0
Feb 11 00:37:19.392: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:37:20.558: INFO: Number of nodes with available pods: 0
Feb 11 00:37:20.559: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:37:21.267: INFO: Number of nodes with available pods: 0
Feb 11 00:37:21.268: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:37:23.838: INFO: Number of nodes with available pods: 1
Feb 11 00:37:23.838: INFO: Node jerma-node is running more than one daemon pod
Feb 11 00:37:24.262: INFO: Number of nodes with available pods: 2
Feb 11 00:37:24.262: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 11 00:37:24.312: INFO: Number of nodes with available pods: 1
Feb 11 00:37:24.312: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:25.329: INFO: Number of nodes with available pods: 1
Feb 11 00:37:25.329: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:26.931: INFO: Number of nodes with available pods: 1
Feb 11 00:37:26.932: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:27.336: INFO: Number of nodes with available pods: 1
Feb 11 00:37:27.337: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:28.334: INFO: Number of nodes with available pods: 1
Feb 11 00:37:28.334: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:29.374: INFO: Number of nodes with available pods: 1
Feb 11 00:37:29.375: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:30.327: INFO: Number of nodes with available pods: 1
Feb 11 00:37:30.327: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:31.333: INFO: Number of nodes with available pods: 1
Feb 11 00:37:31.333: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:32.326: INFO: Number of nodes with available pods: 1
Feb 11 00:37:32.326: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:33.365: INFO: Number of nodes with available pods: 1
Feb 11 00:37:33.365: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:34.536: INFO: Number of nodes with available pods: 1
Feb 11 00:37:34.536: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:35.325: INFO: Number of nodes with available pods: 1
Feb 11 00:37:35.325: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 00:37:36.331: INFO: Number of nodes with available pods: 2
Feb 11 00:37:36.331: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2215, will wait for the garbage collector to delete the pods
Feb 11 00:37:36.397: INFO: Deleting DaemonSet.extensions daemon-set took: 6.561608ms
Feb 11 00:37:36.897: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.598985ms
Feb 11 00:37:53.202: INFO: Number of nodes with available pods: 0
Feb 11 00:37:53.202: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 00:37:53.205: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2215/daemonsets","resourceVersion":"7642641"},"items":null}

Feb 11 00:37:53.208: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2215/pods","resourceVersion":"7642641"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:37:53.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2215" for this suite.

• [SLOW TEST:41.154 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":140,"skipped":2183,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:37:53.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:38:01.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9711" for this suite.

• [SLOW TEST:8.196 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":141,"skipped":2250,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:38:01.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-e57857f8-8f65-4977-a498-4df05bfb69dc
STEP: Creating a pod to test consume configMaps
Feb 11 00:38:01.570: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb" in namespace "projected-9591" to be "success or failure"
Feb 11 00:38:01.597: INFO: Pod "pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.481816ms
Feb 11 00:38:03.609: INFO: Pod "pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038291059s
Feb 11 00:38:05.622: INFO: Pod "pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051772264s
Feb 11 00:38:07.796: INFO: Pod "pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.225669996s
Feb 11 00:38:09.860: INFO: Pod "pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290063521s
Feb 11 00:38:11.881: INFO: Pod "pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.310372579s
STEP: Saw pod success
Feb 11 00:38:11.881: INFO: Pod "pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb" satisfied condition "success or failure"
Feb 11 00:38:11.888: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 00:38:12.093: INFO: Waiting for pod pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb to disappear
Feb 11 00:38:12.114: INFO: Pod pod-projected-configmaps-f9a4e340-19d2-40e4-a1fc-9e1a5e0b73cb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:38:12.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9591" for this suite.

• [SLOW TEST:10.694 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":142,"skipped":2315,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:38:12.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:38:23.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2186" for this suite.

• [SLOW TEST:11.228 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":280,"completed":143,"skipped":2353,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:38:23.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 11 00:38:24.192: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 11 00:38:26.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:38:28.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:38:30.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978304, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 11 00:38:33.543: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:38:33.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5302" for this suite.
STEP: Destroying namespace "webhook-5302-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.578 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":144,"skipped":2364,"failed":0}
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:38:33.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0211 00:38:36.900586       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 00:38:36.900: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:38:36.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5085" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":145,"skipped":2364,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:38:37.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 11 00:38:38.195: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3762 /api/v1/namespaces/watch-3762/configmaps/e2e-watch-test-resource-version 44169cf0-db01-49f5-afe4-2b59d6201ba8 7642913 0 2020-02-11 00:38:37 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 11 00:38:38.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-3762 /api/v1/namespaces/watch-3762/configmaps/e2e-watch-test-resource-version 44169cf0-db01-49f5-afe4-2b59d6201ba8 7642914 0 2020-02-11 00:38:37 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:38:38.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3762" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":146,"skipped":2373,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:38:38.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating pod
Feb 11 00:38:52.663: INFO: Pod pod-hostip-dc01a6a5-f180-4e23-8eaf-75199c58277a has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:38:52.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5904" for this suite.

• [SLOW TEST:14.441 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":147,"skipped":2400,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:38:52.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7385.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7385.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7385.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7385.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7385.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7385.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7385.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7385.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7385.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7385.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 229.228.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.228.229_udp@PTR;check="$$(dig +tcp +noall +answer +search 229.228.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.228.229_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7385.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7385.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7385.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7385.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7385.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7385.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7385.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7385.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7385.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7385.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7385.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 229.228.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.228.229_udp@PTR;check="$$(dig +tcp +noall +answer +search 229.228.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.228.229_tcp@PTR;sleep 1; done
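
Both command blocks poll DNS once per second with dig and write an OK marker file for each record that resolves; the test later collects those files from the probe pod. A rough in-process Go analogue of two of the individual checks, an A lookup and an SRV lookup; this is not the suite's mechanism, only the queries it issues:

package sketch

import (
	"context"
	"fmt"
	"net"
)

// probeOnce performs one iteration of two of the checks from the loops above.
func probeOnce(ctx context.Context) {
	r := net.DefaultResolver
	// A-record check, analogue of: dig +notcp +noall +answer +search <svc> A
	if addrs, err := r.LookupHost(ctx, "dns-test-service.dns-7385.svc.cluster.local"); err == nil && len(addrs) > 0 {
		fmt.Println("OK: A record")
	}
	// SRV-record check, analogue of: dig ... _http._tcp.<svc> SRV
	if _, srvs, err := r.LookupSRV(ctx, "http", "tcp", "dns-test-service.dns-7385.svc.cluster.local"); err == nil && len(srvs) > 0 {
		fmt.Println("OK: SRV record")
	}
}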

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 00:39:05.094: INFO: Unable to read wheezy_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:05.098: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:05.102: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:05.106: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:05.131: INFO: Unable to read jessie_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:05.135: INFO: Unable to read jessie_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:05.155: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:05.159: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:05.181: INFO: Lookups using dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189 failed for: [wheezy_udp@dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_udp@dns-test-service.dns-7385.svc.cluster.local jessie_tcp@dns-test-service.dns-7385.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local]

Feb 11 00:39:10.192: INFO: Unable to read wheezy_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:10.197: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:10.203: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:10.208: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:10.246: INFO: Unable to read jessie_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:10.252: INFO: Unable to read jessie_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:10.258: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:10.263: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:10.294: INFO: Lookups using dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189 failed for: [wheezy_udp@dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_udp@dns-test-service.dns-7385.svc.cluster.local jessie_tcp@dns-test-service.dns-7385.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local]

Feb 11 00:39:15.193: INFO: Unable to read wheezy_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:15.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:15.205: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:15.210: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:15.253: INFO: Unable to read jessie_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:15.258: INFO: Unable to read jessie_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:15.263: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:15.268: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:15.298: INFO: Lookups using dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189 failed for: [wheezy_udp@dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_udp@dns-test-service.dns-7385.svc.cluster.local jessie_tcp@dns-test-service.dns-7385.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local]

Feb 11 00:39:20.191: INFO: Unable to read wheezy_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:20.198: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:20.204: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:20.212: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:20.421: INFO: Unable to read jessie_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:20.427: INFO: Unable to read jessie_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:20.430: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:20.434: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:20.463: INFO: Lookups using dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189 failed for: [wheezy_udp@dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_udp@dns-test-service.dns-7385.svc.cluster.local jessie_tcp@dns-test-service.dns-7385.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local]

Feb 11 00:39:25.197: INFO: Unable to read wheezy_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:25.204: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:25.210: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:25.217: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:25.257: INFO: Unable to read jessie_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:26.143: INFO: Unable to read jessie_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:26.156: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:27.403: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:27.626: INFO: Lookups using dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189 failed for: [wheezy_udp@dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_udp@dns-test-service.dns-7385.svc.cluster.local jessie_tcp@dns-test-service.dns-7385.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local]

Feb 11 00:39:30.192: INFO: Unable to read wheezy_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:30.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:30.205: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:30.211: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:30.259: INFO: Unable to read jessie_udp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:30.273: INFO: Unable to read jessie_tcp@dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:30.279: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:30.283: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:30.326: INFO: Lookups using dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189 failed for: [wheezy_udp@dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@dns-test-service.dns-7385.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_udp@dns-test-service.dns-7385.svc.cluster.local jessie_tcp@dns-test-service.dns-7385.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local]

Feb 11 00:39:35.202: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local from pod dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189: the server could not find the requested resource (get pods dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189)
Feb 11 00:39:35.285: INFO: Lookups using dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-7385.svc.cluster.local]

Feb 11 00:39:40.341: INFO: DNS probes using dns-7385/dns-test-1580f9da-c79a-4e7b-8e2d-d34fb4b6d189 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:39:40.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7385" for this suite.

• [SLOW TEST:48.160 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":280,"completed":148,"skipped":2444,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:39:40.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 00:39:41.105: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 41.190402ms)
Feb 11 00:39:41.193: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 87.602618ms)
Feb 11 00:39:41.201: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.568901ms)
Feb 11 00:39:41.207: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.755985ms)
Feb 11 00:39:41.214: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.581202ms)
Feb 11 00:39:41.220: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.873581ms)
Feb 11 00:39:41.227: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.979506ms)
Feb 11 00:39:41.233: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.121762ms)
Feb 11 00:39:41.239: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.486654ms)
Feb 11 00:39:41.245: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.46295ms)
Feb 11 00:39:41.250: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.886391ms)
Feb 11 00:39:41.255: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.24049ms)
Feb 11 00:39:41.259: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.521429ms)
Feb 11 00:39:41.264: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.944346ms)
Feb 11 00:39:41.270: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.953853ms)
Feb 11 00:39:41.276: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.089448ms)
Feb 11 00:39:41.327: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 51.87903ms)
Feb 11 00:39:41.350: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 22.157384ms)
Feb 11 00:39:41.359: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.619444ms)
Feb 11 00:39:41.368: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.956653ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:39:41.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3597" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":280,"completed":149,"skipped":2459,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:39:41.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 11 00:39:52.119: INFO: Successfully updated pod "labelsupdate1726f9a1-cff3-4dff-bc96-d3347f6d98a6"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:39:54.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1914" for this suite.

• [SLOW TEST:12.849 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":150,"skipped":2463,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:39:54.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 11 00:39:54.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2038'
Feb 11 00:39:54.667: INFO: stderr: ""
Feb 11 00:39:54.667: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Feb 11 00:40:04.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2038 -o json'
Feb 11 00:40:04.863: INFO: stderr: ""
Feb 11 00:40:04.863: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-11T00:39:54Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-2038\",\n        \"resourceVersion\": \"7643270\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-2038/pods/e2e-test-httpd-pod\",\n        \"uid\": \"7b10c62e-6674-4849-960e-22de1d831256\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-g24lh\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-g24lh\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-g24lh\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-11T00:39:54Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-11T00:40:02Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-11T00:40:02Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-11T00:39:54Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://6592c01c5ef5d31df316e6b42352dfb847bd414fb6476203dd9d1079db99f616\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-11T00:39:59Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.2\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.2\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-11T00:39:54Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 11 00:40:04.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2038'
Feb 11 00:40:06.555: INFO: stderr: ""
Feb 11 00:40:06.555: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904
Feb 11 00:40:06.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2038'
Feb 11 00:40:13.254: INFO: stderr: ""
Feb 11 00:40:13.254: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:40:13.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2038" for this suite.

• [SLOW TEST:19.037 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":280,"completed":151,"skipped":2474,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:40:13.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-63ab3834-57dc-407c-acac-e55ce24a043f
STEP: Creating a pod to test consume secrets
Feb 11 00:40:13.437: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8e393402-238b-4422-914b-1f9b9b85d9d2" in namespace "projected-7603" to be "success or failure"
Feb 11 00:40:13.477: INFO: Pod "pod-projected-secrets-8e393402-238b-4422-914b-1f9b9b85d9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.501959ms
Feb 11 00:40:15.484: INFO: Pod "pod-projected-secrets-8e393402-238b-4422-914b-1f9b9b85d9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046258815s
Feb 11 00:40:17.493: INFO: Pod "pod-projected-secrets-8e393402-238b-4422-914b-1f9b9b85d9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055386835s
Feb 11 00:40:19.499: INFO: Pod "pod-projected-secrets-8e393402-238b-4422-914b-1f9b9b85d9d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06182224s
Feb 11 00:40:21.510: INFO: Pod "pod-projected-secrets-8e393402-238b-4422-914b-1f9b9b85d9d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072132884s
STEP: Saw pod success
Feb 11 00:40:21.510: INFO: Pod "pod-projected-secrets-8e393402-238b-4422-914b-1f9b9b85d9d2" satisfied condition "success or failure"
Feb 11 00:40:21.514: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-8e393402-238b-4422-914b-1f9b9b85d9d2 container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 00:40:21.554: INFO: Waiting for pod pod-projected-secrets-8e393402-238b-4422-914b-1f9b9b85d9d2 to disappear
Feb 11 00:40:21.561: INFO: Pod pod-projected-secrets-8e393402-238b-4422-914b-1f9b9b85d9d2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:40:21.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7603" for this suite.

• [SLOW TEST:8.344 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":152,"skipped":2475,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:40:21.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-0e76ea9a-14f5-4968-b92c-d17c1f4982e1
STEP: Creating a pod to test consume configMaps
Feb 11 00:40:21.765: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-857ae8b6-9500-40c6-8ca2-08220a320837" in namespace "projected-8836" to be "success or failure"
Feb 11 00:40:21.777: INFO: Pod "pod-projected-configmaps-857ae8b6-9500-40c6-8ca2-08220a320837": Phase="Pending", Reason="", readiness=false. Elapsed: 11.654298ms
Feb 11 00:40:23.791: INFO: Pod "pod-projected-configmaps-857ae8b6-9500-40c6-8ca2-08220a320837": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026206598s
Feb 11 00:40:25.800: INFO: Pod "pod-projected-configmaps-857ae8b6-9500-40c6-8ca2-08220a320837": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034354271s
Feb 11 00:40:27.807: INFO: Pod "pod-projected-configmaps-857ae8b6-9500-40c6-8ca2-08220a320837": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041909518s
Feb 11 00:40:29.814: INFO: Pod "pod-projected-configmaps-857ae8b6-9500-40c6-8ca2-08220a320837": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048729142s
STEP: Saw pod success
Feb 11 00:40:29.814: INFO: Pod "pod-projected-configmaps-857ae8b6-9500-40c6-8ca2-08220a320837" satisfied condition "success or failure"
Feb 11 00:40:29.817: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-857ae8b6-9500-40c6-8ca2-08220a320837 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 00:40:29.851: INFO: Waiting for pod pod-projected-configmaps-857ae8b6-9500-40c6-8ca2-08220a320837 to disappear
Feb 11 00:40:29.890: INFO: Pod pod-projected-configmaps-857ae8b6-9500-40c6-8ca2-08220a320837 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:40:29.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8836" for this suite.

• [SLOW TEST:8.285 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":153,"skipped":2487,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:40:29.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating replication controller my-hostname-basic-2265a40e-75ee-42fc-ac58-205cfafce6c5
Feb 11 00:40:30.162: INFO: Pod name my-hostname-basic-2265a40e-75ee-42fc-ac58-205cfafce6c5: Found 0 pods out of 1
Feb 11 00:40:35.504: INFO: Pod name my-hostname-basic-2265a40e-75ee-42fc-ac58-205cfafce6c5: Found 1 pods out of 1
Feb 11 00:40:35.504: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2265a40e-75ee-42fc-ac58-205cfafce6c5" are running
Feb 11 00:40:39.518: INFO: Pod "my-hostname-basic-2265a40e-75ee-42fc-ac58-205cfafce6c5-hp5mr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 00:40:30 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 00:40:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2265a40e-75ee-42fc-ac58-205cfafce6c5]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 00:40:30 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-2265a40e-75ee-42fc-ac58-205cfafce6c5]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 00:40:30 +0000 UTC Reason: Message:}])
Feb 11 00:40:39.519: INFO: Trying to dial the pod
Feb 11 00:40:44.551: INFO: Controller my-hostname-basic-2265a40e-75ee-42fc-ac58-205cfafce6c5: Got expected result from replica 1 [my-hostname-basic-2265a40e-75ee-42fc-ac58-205cfafce6c5-hp5mr]: "my-hostname-basic-2265a40e-75ee-42fc-ac58-205cfafce6c5-hp5mr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:40:44.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3062" for this suite.

• [SLOW TEST:14.665 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":154,"skipped":2538,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:40:44.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:41:18.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2556" for this suite.

• [SLOW TEST:34.164 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":155,"skipped":2551,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:41:18.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 11 00:41:18.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3007'
Feb 11 00:41:19.031: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 11 00:41:19.031: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Feb 11 00:41:19.086: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-gxt4t]
Feb 11 00:41:19.086: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-gxt4t" in namespace "kubectl-3007" to be "running and ready"
Feb 11 00:41:19.160: INFO: Pod "e2e-test-httpd-rc-gxt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 73.237888ms
Feb 11 00:41:21.173: INFO: Pod "e2e-test-httpd-rc-gxt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08667156s
Feb 11 00:41:23.185: INFO: Pod "e2e-test-httpd-rc-gxt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098274238s
Feb 11 00:41:25.365: INFO: Pod "e2e-test-httpd-rc-gxt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278873899s
Feb 11 00:41:27.372: INFO: Pod "e2e-test-httpd-rc-gxt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.285972368s
Feb 11 00:41:29.380: INFO: Pod "e2e-test-httpd-rc-gxt4t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.293514881s
Feb 11 00:41:31.388: INFO: Pod "e2e-test-httpd-rc-gxt4t": Phase="Running", Reason="", readiness=true. Elapsed: 12.301977388s
Feb 11 00:41:31.389: INFO: Pod "e2e-test-httpd-rc-gxt4t" satisfied condition "running and ready"
Feb 11 00:41:31.389: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-gxt4t]
Feb 11 00:41:31.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-3007'
Feb 11 00:41:31.610: INFO: stderr: ""
Feb 11 00:41:31.610: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Tue Feb 11 00:41:28.483803 2020] [mpm_event:notice] [pid 1:tid 140593343843176] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue Feb 11 00:41:28.483889 2020] [core:notice] [pid 1:tid 140593343843176] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639
Feb 11 00:41:31.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3007'
Feb 11 00:41:31.795: INFO: stderr: ""
Feb 11 00:41:31.796: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:41:31.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3007" for this suite.

• [SLOW TEST:13.103 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1630
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":280,"completed":156,"skipped":2556,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:41:31.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-b1cf9b77-3a1f-4568-b3c0-7ee60f7b8546
STEP: Creating a pod to test consume configMaps
Feb 11 00:41:32.015: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e" in namespace "configmap-5499" to be "success or failure"
Feb 11 00:41:32.043: INFO: Pod "pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.358923ms
Feb 11 00:41:34.051: INFO: Pod "pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036427271s
Feb 11 00:41:36.057: INFO: Pod "pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042332872s
Feb 11 00:41:38.064: INFO: Pod "pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048539545s
Feb 11 00:41:40.073: INFO: Pod "pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057807011s
Feb 11 00:41:42.081: INFO: Pod "pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066185626s
STEP: Saw pod success
Feb 11 00:41:42.081: INFO: Pod "pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e" satisfied condition "success or failure"
Feb 11 00:41:42.102: INFO: Trying to get logs from node jerma-node pod pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e container configmap-volume-test: 
STEP: delete the pod
Feb 11 00:41:42.204: INFO: Waiting for pod pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e to disappear
Feb 11 00:41:42.216: INFO: Pod pod-configmaps-bb5225a2-6989-4c52-8880-a0bed9bf3c3e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:41:42.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5499" for this suite.

• [SLOW TEST:10.388 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":157,"skipped":2574,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:41:42.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-b2xrp in namespace proxy-5802
I0211 00:41:42.379283       9 runners.go:189] Created replication controller with name: proxy-service-b2xrp, namespace: proxy-5802, replica count: 1
I0211 00:41:43.430668       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:41:44.431388       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:41:45.432659       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:41:46.433277       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:41:47.434272       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:41:48.435027       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:41:49.435707       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 00:41:50.436300       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 00:41:51.436860       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 00:41:52.437778       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 00:41:53.438341       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 00:41:54.438888       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 00:41:55.439319       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 00:41:56.439696       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 00:41:57.440276       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 00:41:58.440822       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0211 00:41:59.441302       9 runners.go:189] proxy-service-b2xrp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 11 00:41:59.449: INFO: setup took 17.099114038s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
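Each attempt below exercises one of the apiserver proxy URL shapes. Fetched by hand through a local API proxy they look like this; the "https:" prefix tells the apiserver to dial the backend over TLS, and the suffix selects a pod port or a named service port:

kubectl proxy --port=8001 &
curl http://127.0.0.1:8001/api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/
curl http://127.0.0.1:8001/api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/
curl http://127.0.0.1:8001/api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/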
Feb 11 00:41:59.482: INFO: (0) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 31.92198ms)
Feb 11 00:41:59.483: INFO: (0) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 33.686292ms)
Feb 11 00:41:59.484: INFO: (0) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 34.367581ms)
Feb 11 00:41:59.484: INFO: (0) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 34.614023ms)
Feb 11 00:41:59.486: INFO: (0) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 36.559688ms)
Feb 11 00:41:59.486: INFO: (0) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:1080/proxy/: test<... (200; 36.91199ms)
Feb 11 00:41:59.487: INFO: (0) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 37.724985ms)
Feb 11 00:41:59.487: INFO: (0) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 37.941222ms)
Feb 11 00:41:59.488: INFO: (0) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 38.208992ms)
Feb 11 00:41:59.494: INFO: (0) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 44.353051ms)
Feb 11 00:41:59.494: INFO: (0) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 44.608408ms)
Feb 11 00:41:59.506: INFO: (0) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 56.621898ms)
Feb 11 00:41:59.506: INFO: (0) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 56.600106ms)
Feb 11 00:41:59.506: INFO: (0) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 56.579376ms)
Feb 11 00:41:59.506: INFO: (0) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 56.577516ms)
Feb 11 00:41:59.506: INFO: (0) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test<... (200; 20.695245ms)
Feb 11 00:41:59.528: INFO: (1) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 21.127892ms)
Feb 11 00:41:59.528: INFO: (1) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 20.701089ms)
Feb 11 00:41:59.528: INFO: (1) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 21.122645ms)
Feb 11 00:41:59.529: INFO: (1) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 21.795338ms)
Feb 11 00:41:59.529: INFO: (1) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 22.718899ms)
Feb 11 00:41:59.530: INFO: (1) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 22.842155ms)
Feb 11 00:41:59.531: INFO: (1) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 23.463088ms)
Feb 11 00:41:59.531: INFO: (1) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 24.317875ms)
Feb 11 00:41:59.542: INFO: (2) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 10.683033ms)
Feb 11 00:41:59.544: INFO: (2) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 12.102673ms)
Feb 11 00:41:59.546: INFO: (2) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 14.402634ms)
Feb 11 00:41:59.546: INFO: (2) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:1080/proxy/: test<... (200; 14.65157ms)
Feb 11 00:41:59.546: INFO: (2) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 14.706722ms)
Feb 11 00:41:59.546: INFO: (2) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 14.781294ms)
Feb 11 00:41:59.546: INFO: (2) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 15.004569ms)
Feb 11 00:41:59.547: INFO: (2) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 15.668794ms)
Feb 11 00:41:59.547: INFO: (2) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 15.703747ms)
Feb 11 00:41:59.547: INFO: (2) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 15.704493ms)
Feb 11 00:41:59.547: INFO: (2) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 15.786042ms)
Feb 11 00:41:59.547: INFO: (2) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 15.970726ms)
Feb 11 00:41:59.547: INFO: (2) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test<... (200; 517.229076ms)
Feb 11 00:42:00.068: INFO: (3) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 518.047488ms)
Feb 11 00:42:00.068: INFO: (3) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 518.18677ms)
Feb 11 00:42:00.069: INFO: (3) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: ... (200; 519.205088ms)
Feb 11 00:42:00.070: INFO: (3) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 519.351098ms)
Feb 11 00:42:00.070: INFO: (3) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 519.409126ms)
Feb 11 00:42:00.073: INFO: (3) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 523.067756ms)
Feb 11 00:42:00.074: INFO: (3) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 524.2713ms)
Feb 11 00:42:00.075: INFO: (3) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 524.193618ms)
Feb 11 00:42:00.076: INFO: (3) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 525.488893ms)
Feb 11 00:42:00.076: INFO: (3) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 525.749153ms)
Feb 11 00:42:00.076: INFO: (3) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 525.615567ms)
Feb 11 00:42:00.077: INFO: (3) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 526.90758ms)
Feb 11 00:42:00.078: INFO: (3) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 527.429917ms)
Feb 11 00:42:00.124: INFO: (4) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 46.423131ms)
Feb 11 00:42:00.127: INFO: (4) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 48.978783ms)
Feb 11 00:42:00.127: INFO: (4) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 49.496752ms)
Feb 11 00:42:00.128: INFO: (4) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 49.916644ms)
Feb 11 00:42:00.128: INFO: (4) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test<... (200; 50.147051ms)
Feb 11 00:42:00.128: INFO: (4) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 50.059902ms)
Feb 11 00:42:00.129: INFO: (4) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 50.730885ms)
Feb 11 00:42:00.129: INFO: (4) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 50.623937ms)
Feb 11 00:42:00.129: INFO: (4) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 50.85844ms)
Feb 11 00:42:00.129: INFO: (4) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 50.904631ms)
Feb 11 00:42:00.133: INFO: (4) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 55.422667ms)
Feb 11 00:42:00.134: INFO: (4) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 56.080246ms)
Feb 11 00:42:00.134: INFO: (4) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 56.003134ms)
Feb 11 00:42:00.135: INFO: (4) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 56.311955ms)
Feb 11 00:42:00.135: INFO: (4) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 56.504584ms)
Feb 11 00:42:00.147: INFO: (5) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 12.324141ms)
Feb 11 00:42:00.148: INFO: (5) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 13.03808ms)
Feb 11 00:42:00.149: INFO: (5) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 13.337647ms)
Feb 11 00:42:00.151: INFO: (5) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 16.284515ms)
Feb 11 00:42:00.154: INFO: (5) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 18.920971ms)
Feb 11 00:42:00.154: INFO: (5) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 19.176247ms)
Feb 11 00:42:00.155: INFO: (5) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 20.012187ms)
Feb 11 00:42:00.156: INFO: (5) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 20.819955ms)
Feb 11 00:42:00.156: INFO: (5) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 20.523039ms)
Feb 11 00:42:00.156: INFO: (5) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 20.7863ms)
Feb 11 00:42:00.157: INFO: (5) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:1080/proxy/: test<... (200; 21.223222ms)
Feb 11 00:42:00.157: INFO: (5) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test (200; 29.894547ms)
Feb 11 00:42:00.188: INFO: (6) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 29.885233ms)
Feb 11 00:42:00.188: INFO: (6) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 30.154058ms)
Feb 11 00:42:00.188: INFO: (6) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 29.968092ms)
Feb 11 00:42:00.196: INFO: (6) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 37.889152ms)
Feb 11 00:42:00.196: INFO: (6) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:1080/proxy/: test<... (200; 37.921292ms)
Feb 11 00:42:00.197: INFO: (6) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: ... (200; 46.755728ms)
Feb 11 00:42:00.208: INFO: (6) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 49.640688ms)
Feb 11 00:42:00.216: INFO: (7) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 8.018262ms)
Feb 11 00:42:00.222: INFO: (7) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test<... (200; 15.357429ms)
Feb 11 00:42:00.225: INFO: (7) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 16.865435ms)
Feb 11 00:42:00.228: INFO: (7) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 19.81426ms)
Feb 11 00:42:00.229: INFO: (7) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 20.312986ms)
Feb 11 00:42:00.229: INFO: (7) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 20.297527ms)
Feb 11 00:42:00.229: INFO: (7) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 20.189112ms)
Feb 11 00:42:00.229: INFO: (7) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 20.330205ms)
Feb 11 00:42:00.230: INFO: (7) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 20.939861ms)
Feb 11 00:42:00.231: INFO: (7) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 22.661725ms)
Feb 11 00:42:00.232: INFO: (7) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 23.449642ms)
Feb 11 00:42:00.232: INFO: (7) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 23.573055ms)
Feb 11 00:42:00.232: INFO: (7) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 23.503739ms)
Feb 11 00:42:00.232: INFO: (7) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 23.459118ms)
Feb 11 00:42:00.233: INFO: (7) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 24.166229ms)
Feb 11 00:42:00.243: INFO: (8) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 9.91136ms)
Feb 11 00:42:00.244: INFO: (8) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 10.292196ms)
Feb 11 00:42:00.245: INFO: (8) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test<... (200; 12.476514ms)
Feb 11 00:42:00.246: INFO: (8) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 12.695055ms)
Feb 11 00:42:00.246: INFO: (8) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 12.912088ms)
Feb 11 00:42:00.246: INFO: (8) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 13.083664ms)
Feb 11 00:42:00.246: INFO: (8) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 13.206426ms)
Feb 11 00:42:00.247: INFO: (8) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 13.915103ms)
Feb 11 00:42:00.250: INFO: (8) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 17.175809ms)
Feb 11 00:42:00.250: INFO: (8) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 17.200507ms)
Feb 11 00:42:00.250: INFO: (8) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 17.315905ms)
Feb 11 00:42:00.251: INFO: (8) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 17.332282ms)
Feb 11 00:42:00.251: INFO: (8) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 17.634595ms)
Feb 11 00:42:00.251: INFO: (8) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 18.008847ms)
Feb 11 00:42:00.251: INFO: (8) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 18.312163ms)
Feb 11 00:42:00.262: INFO: (9) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test<... (200; 10.14086ms)
Feb 11 00:42:00.262: INFO: (9) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 10.389283ms)
Feb 11 00:42:00.262: INFO: (9) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 10.363685ms)
Feb 11 00:42:00.262: INFO: (9) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 10.403312ms)
Feb 11 00:42:00.263: INFO: (9) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 10.864125ms)
Feb 11 00:42:00.263: INFO: (9) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 10.977397ms)
Feb 11 00:42:00.263: INFO: (9) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 11.737072ms)
Feb 11 00:42:00.264: INFO: (9) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 12.664616ms)
Feb 11 00:42:00.270: INFO: (9) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 17.8591ms)
Feb 11 00:42:00.270: INFO: (9) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 18.429903ms)
Feb 11 00:42:00.270: INFO: (9) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 18.519844ms)
Feb 11 00:42:00.270: INFO: (9) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 18.691573ms)
Feb 11 00:42:00.270: INFO: (9) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 18.535598ms)
Feb 11 00:42:00.270: INFO: (9) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 18.644456ms)
Feb 11 00:42:00.271: INFO: (9) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 19.542797ms)
Feb 11 00:42:00.280: INFO: (10) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 8.448221ms)
Feb 11 00:42:00.281: INFO: (10) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 9.38789ms)
Feb 11 00:42:00.282: INFO: (10) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 10.000937ms)
Feb 11 00:42:00.283: INFO: (10) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 11.486041ms)
Feb 11 00:42:00.284: INFO: (10) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 12.403422ms)
Feb 11 00:42:00.284: INFO: (10) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 12.190733ms)
Feb 11 00:42:00.284: INFO: (10) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 12.358153ms)
Feb 11 00:42:00.284: INFO: (10) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 12.487729ms)
Feb 11 00:42:00.284: INFO: (10) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 12.429735ms)
Feb 11 00:42:00.284: INFO: (10) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 12.330047ms)
Feb 11 00:42:00.284: INFO: (10) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test<... (200; 12.498872ms)
Feb 11 00:42:00.284: INFO: (10) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 12.632648ms)
Feb 11 00:42:00.284: INFO: (10) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 12.652354ms)
Feb 11 00:42:00.284: INFO: (10) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 12.651262ms)
Feb 11 00:42:00.285: INFO: (10) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 13.800487ms)
Feb 11 00:42:00.290: INFO: (11) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 4.694239ms)
Feb 11 00:42:00.296: INFO: (11) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 10.144532ms)
Feb 11 00:42:00.296: INFO: (11) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 10.675773ms)
Feb 11 00:42:00.297: INFO: (11) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 11.92016ms)
Feb 11 00:42:00.298: INFO: (11) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 12.626905ms)
Feb 11 00:42:00.299: INFO: (11) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 13.05311ms)
Feb 11 00:42:00.299: INFO: (11) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 13.105392ms)
Feb 11 00:42:00.299: INFO: (11) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 13.270096ms)
Feb 11 00:42:00.299: INFO: (11) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 13.497387ms)
Feb 11 00:42:00.307: INFO: (11) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 21.405086ms)
Feb 11 00:42:00.308: INFO: (11) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:1080/proxy/: test<... (200; 22.153648ms)
Feb 11 00:42:00.308: INFO: (11) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 22.476405ms)
Feb 11 00:42:00.308: INFO: (11) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 22.674744ms)
Feb 11 00:42:00.308: INFO: (11) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 22.777514ms)
Feb 11 00:42:00.309: INFO: (11) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test<... (200; 14.27052ms)
Feb 11 00:42:00.325: INFO: (12) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 16.190938ms)
Feb 11 00:42:00.325: INFO: (12) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 16.169941ms)
Feb 11 00:42:00.326: INFO: (12) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 16.599589ms)
Feb 11 00:42:00.326: INFO: (12) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 16.9024ms)
Feb 11 00:42:00.326: INFO: (12) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 17.061163ms)
Feb 11 00:42:00.327: INFO: (12) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test (200; 6.205893ms)
Feb 11 00:42:00.338: INFO: (13) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 7.106013ms)
Feb 11 00:42:00.338: INFO: (13) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:1080/proxy/: test<... (200; 7.02249ms)
Feb 11 00:42:00.339: INFO: (13) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 8.407781ms)
Feb 11 00:42:00.339: INFO: (13) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 8.471199ms)
Feb 11 00:42:00.340: INFO: (13) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 8.542282ms)
Feb 11 00:42:00.340: INFO: (13) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test (200; 12.534716ms)
Feb 11 00:42:00.357: INFO: (14) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 12.784745ms)
Feb 11 00:42:00.357: INFO: (14) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:1080/proxy/: test<... (200; 12.684974ms)
Feb 11 00:42:00.357: INFO: (14) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 12.733326ms)
Feb 11 00:42:00.361: INFO: (15) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:1080/proxy/: test<... (200; 3.323817ms)
Feb 11 00:42:00.362: INFO: (15) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 4.591712ms)
Feb 11 00:42:00.362: INFO: (15) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 4.435289ms)
Feb 11 00:42:00.363: INFO: (15) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 5.174027ms)
Feb 11 00:42:00.367: INFO: (15) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 9.335479ms)
Feb 11 00:42:00.368: INFO: (15) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 10.862005ms)
Feb 11 00:42:00.368: INFO: (15) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 10.57642ms)
Feb 11 00:42:00.369: INFO: (15) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 11.165575ms)
Feb 11 00:42:00.373: INFO: (15) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 14.690198ms)
Feb 11 00:42:00.373: INFO: (15) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test<... (200; 9.740111ms)
Feb 11 00:42:00.386: INFO: (16) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 10.063041ms)
Feb 11 00:42:00.386: INFO: (16) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 9.987085ms)
Feb 11 00:42:00.386: INFO: (16) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 10.054128ms)
Feb 11 00:42:00.388: INFO: (16) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 12.08613ms)
Feb 11 00:42:00.389: INFO: (16) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 13.162248ms)
Feb 11 00:42:00.390: INFO: (16) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 13.465868ms)
Feb 11 00:42:00.390: INFO: (16) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 13.553023ms)
Feb 11 00:42:00.390: INFO: (16) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 13.573837ms)
Feb 11 00:42:00.390: INFO: (16) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 13.642731ms)
Feb 11 00:42:00.390: INFO: (16) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 13.963856ms)
Feb 11 00:42:00.391: INFO: (16) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 15.413057ms)
Feb 11 00:42:00.401: INFO: (17) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 8.81809ms)
Feb 11 00:42:00.401: INFO: (17) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 9.070742ms)
Feb 11 00:42:00.401: INFO: (17) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 9.254041ms)
Feb 11 00:42:00.401: INFO: (17) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 9.584582ms)
Feb 11 00:42:00.402: INFO: (17) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 10.142582ms)
Feb 11 00:42:00.402: INFO: (17) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 10.114815ms)
Feb 11 00:42:00.402: INFO: (17) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:1080/proxy/: test<... (200; 10.295952ms)
Feb 11 00:42:00.402: INFO: (17) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 10.662182ms)
Feb 11 00:42:00.403: INFO: (17) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test (200; 10.948136ms)
Feb 11 00:42:00.403: INFO: (17) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 11.05046ms)
Feb 11 00:42:00.403: INFO: (17) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 11.032367ms)
Feb 11 00:42:00.407: INFO: (17) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 15.178668ms)
Feb 11 00:42:00.407: INFO: (17) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 15.396782ms)
Feb 11 00:42:00.417: INFO: (18) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 9.05068ms)
Feb 11 00:42:00.417: INFO: (18) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 9.8859ms)
Feb 11 00:42:00.418: INFO: (18) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 11.039202ms)
Feb 11 00:42:00.418: INFO: (18) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 10.776247ms)
Feb 11 00:42:00.418: INFO: (18) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 11.050971ms)
Feb 11 00:42:00.419: INFO: (18) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 11.061227ms)
Feb 11 00:42:00.419: INFO: (18) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:1080/proxy/: test<... (200; 11.165201ms)
Feb 11 00:42:00.419: INFO: (18) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 11.526544ms)
Feb 11 00:42:00.419: INFO: (18) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 12.151848ms)
Feb 11 00:42:00.420: INFO: (18) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx/proxy/: test (200; 12.186361ms)
Feb 11 00:42:00.420: INFO: (18) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test<... (200; 5.07843ms)
Feb 11 00:42:00.428: INFO: (19) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:162/proxy/: bar (200; 6.357789ms)
Feb 11 00:42:00.428: INFO: (19) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:462/proxy/: tls qux (200; 6.225615ms)
Feb 11 00:42:00.428: INFO: (19) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:1080/proxy/: ... (200; 6.326598ms)
Feb 11 00:42:00.428: INFO: (19) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:443/proxy/: test (200; 7.229167ms)
Feb 11 00:42:00.431: INFO: (19) /api/v1/namespaces/proxy-5802/pods/proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 8.455085ms)
Feb 11 00:42:00.431: INFO: (19) /api/v1/namespaces/proxy-5802/pods/http:proxy-service-b2xrp-xnxvx:160/proxy/: foo (200; 9.065674ms)
Feb 11 00:42:00.431: INFO: (19) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname2/proxy/: tls qux (200; 9.618009ms)
Feb 11 00:42:00.432: INFO: (19) /api/v1/namespaces/proxy-5802/pods/https:proxy-service-b2xrp-xnxvx:460/proxy/: tls baz (200; 9.597893ms)
Feb 11 00:42:00.435: INFO: (19) /api/v1/namespaces/proxy-5802/services/https:proxy-service-b2xrp:tlsportname1/proxy/: tls baz (200; 12.887151ms)
Feb 11 00:42:00.435: INFO: (19) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname2/proxy/: bar (200; 13.307534ms)
Feb 11 00:42:00.435: INFO: (19) /api/v1/namespaces/proxy-5802/services/proxy-service-b2xrp:portname1/proxy/: foo (200; 13.246634ms)
Feb 11 00:42:00.436: INFO: (19) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname2/proxy/: bar (200; 13.457644ms)
Feb 11 00:42:00.436: INFO: (19) /api/v1/namespaces/proxy-5802/services/http:proxy-service-b2xrp:portname1/proxy/: foo (200; 13.772998ms)
STEP: deleting ReplicationController proxy-service-b2xrp in namespace proxy-5802, will wait for the garbage collector to delete the pods
Feb 11 00:42:00.498: INFO: Deleting ReplicationController proxy-service-b2xrp took: 8.791421ms
Feb 11 00:42:00.799: INFO: Terminating ReplicationController proxy-service-b2xrp pods took: 300.847596ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:42:05.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5802" for this suite.

• [SLOW TEST:22.983 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":280,"completed":158,"skipped":2589,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:42:05.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-84a33a61-7efa-463f-9acb-6708b2705fba
STEP: Creating a pod to test consume configMaps
Feb 11 00:42:05.338: INFO: Waiting up to 5m0s for pod "pod-configmaps-815c0184-7c25-43ed-848f-77b94ade00fe" in namespace "configmap-3381" to be "success or failure"
Feb 11 00:42:05.348: INFO: Pod "pod-configmaps-815c0184-7c25-43ed-848f-77b94ade00fe": Phase="Pending", Reason="", readiness=false. Elapsed: 9.876407ms
Feb 11 00:42:07.358: INFO: Pod "pod-configmaps-815c0184-7c25-43ed-848f-77b94ade00fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019472451s
Feb 11 00:42:09.367: INFO: Pod "pod-configmaps-815c0184-7c25-43ed-848f-77b94ade00fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028422499s
Feb 11 00:42:11.389: INFO: Pod "pod-configmaps-815c0184-7c25-43ed-848f-77b94ade00fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051062187s
Feb 11 00:42:13.403: INFO: Pod "pod-configmaps-815c0184-7c25-43ed-848f-77b94ade00fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065025772s
STEP: Saw pod success
Feb 11 00:42:13.404: INFO: Pod "pod-configmaps-815c0184-7c25-43ed-848f-77b94ade00fe" satisfied condition "success or failure"
Feb 11 00:42:13.409: INFO: Trying to get logs from node jerma-node pod pod-configmaps-815c0184-7c25-43ed-848f-77b94ade00fe container configmap-volume-test: 
STEP: delete the pod
Feb 11 00:42:13.450: INFO: Waiting for pod pod-configmaps-815c0184-7c25-43ed-848f-77b94ade00fe to disappear
Feb 11 00:42:13.455: INFO: Pod pod-configmaps-815c0184-7c25-43ed-848f-77b94ade00fe no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:42:13.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3381" for this suite.

• [SLOW TEST:8.256 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":159,"skipped":2645,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:42:13.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5324 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5324;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5324 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5324;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5324.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5324.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5324.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5324.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5324.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5324.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5324.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5324.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5324.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5324.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.116.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.116.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.116.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.116.67_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5324 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5324;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5324 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5324;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5324.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5324.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5324.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5324.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5324.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5324.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5324.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5324.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5324.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5324.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5324.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 67.116.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.116.67_udp@PTR;check="$$(dig +tcp +noall +answer +search 67.116.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.116.67_tcp@PTR;sleep 1; done
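Each probe pod runs that loop for up to 600 one-second iterations; every name that resolves writes an OK marker under /results, which the test later reads back to decide which lookups succeeded. The doubled $$ in the logged commands is not shell syntax: kubelet expands $(...) references in container args, so $$ is its escape for a literal $. Distilled to a single name, the probe as it actually executes inside the container is simply:

  # one probe, with the kubelet escaping undone (single $, not $$)
  for i in $(seq 1 600); do
    check="$(dig +notcp +noall +answer +search dns-test-service A)" \
      && test -n "$check" && echo OK > /results/wheezy_udp@dns-test-service
    sleep 1
  done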

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 00:42:26.931: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:26.938: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:26.946: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:26.953: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:26.959: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:26.964: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:26.969: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:26.975: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:27.018: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:27.023: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:27.027: INFO: Unable to read jessie_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:27.032: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:27.037: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:27.042: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:27.054: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:27.061: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:27.097: INFO: Lookups using dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5324 wheezy_tcp@dns-test-service.dns-5324 wheezy_udp@dns-test-service.dns-5324.svc wheezy_tcp@dns-test-service.dns-5324.svc wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5324 jessie_tcp@dns-test-service.dns-5324 jessie_udp@dns-test-service.dns-5324.svc jessie_tcp@dns-test-service.dns-5324.svc jessie_udp@_http._tcp.dns-test-service.dns-5324.svc jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc]

Feb 11 00:42:32.109: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.115: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.122: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.128: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.133: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.142: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.147: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.207: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.211: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.216: INFO: Unable to read jessie_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.220: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.225: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.228: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.231: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.235: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:32.274: INFO: Lookups using dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5324 wheezy_tcp@dns-test-service.dns-5324 wheezy_udp@dns-test-service.dns-5324.svc wheezy_tcp@dns-test-service.dns-5324.svc wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5324 jessie_tcp@dns-test-service.dns-5324 jessie_udp@dns-test-service.dns-5324.svc jessie_tcp@dns-test-service.dns-5324.svc jessie_udp@_http._tcp.dns-test-service.dns-5324.svc jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc]

Feb 11 00:42:37.124: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.145: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.150: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.155: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.162: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.166: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.213: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.246: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.250: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.254: INFO: Unable to read jessie_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.257: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.260: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.263: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.269: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.273: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:37.294: INFO: Lookups using dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5324 wheezy_tcp@dns-test-service.dns-5324 wheezy_udp@dns-test-service.dns-5324.svc wheezy_tcp@dns-test-service.dns-5324.svc wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5324 jessie_tcp@dns-test-service.dns-5324 jessie_udp@dns-test-service.dns-5324.svc jessie_tcp@dns-test-service.dns-5324.svc jessie_udp@_http._tcp.dns-test-service.dns-5324.svc jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc]

Feb 11 00:42:42.105: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.111: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.115: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.126: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.135: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.145: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.153: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.163: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.228: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.238: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.244: INFO: Unable to read jessie_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.250: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.257: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.261: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.270: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.275: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:42.337: INFO: Lookups using dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5324 wheezy_tcp@dns-test-service.dns-5324 wheezy_udp@dns-test-service.dns-5324.svc wheezy_tcp@dns-test-service.dns-5324.svc wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5324 jessie_tcp@dns-test-service.dns-5324 jessie_udp@dns-test-service.dns-5324.svc jessie_tcp@dns-test-service.dns-5324.svc jessie_udp@_http._tcp.dns-test-service.dns-5324.svc jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc]

Feb 11 00:42:47.110: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.114: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.118: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.121: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.151: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.154: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.159: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.162: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.187: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.196: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.203: INFO: Unable to read jessie_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.209: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.212: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.215: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.219: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.223: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:47.310: INFO: Lookups using dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5324 wheezy_tcp@dns-test-service.dns-5324 wheezy_udp@dns-test-service.dns-5324.svc wheezy_tcp@dns-test-service.dns-5324.svc wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5324 jessie_tcp@dns-test-service.dns-5324 jessie_udp@dns-test-service.dns-5324.svc jessie_tcp@dns-test-service.dns-5324.svc jessie_udp@_http._tcp.dns-test-service.dns-5324.svc jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc]

Feb 11 00:42:52.111: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.118: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.123: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.137: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.145: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.151: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.156: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.161: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.187: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.194: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.198: INFO: Unable to read jessie_udp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.201: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324 from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.203: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.208: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.212: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.216: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc from pod dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8: the server could not find the requested resource (get pods dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8)
Feb 11 00:42:52.256: INFO: Lookups using dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5324 wheezy_tcp@dns-test-service.dns-5324 wheezy_udp@dns-test-service.dns-5324.svc wheezy_tcp@dns-test-service.dns-5324.svc wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5324 jessie_tcp@dns-test-service.dns-5324 jessie_udp@dns-test-service.dns-5324.svc jessie_tcp@dns-test-service.dns-5324.svc jessie_udp@_http._tcp.dns-test-service.dns-5324.svc jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc]

Feb 11 00:42:57.260: INFO: DNS probes using dns-5324/dns-test-da7a65c6-e33b-4aac-a8f9-e4256eb3bfa8 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:42:57.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5324" for this suite.

• [SLOW TEST:44.447 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":160,"skipped":2670,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:42:57.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Feb 11 00:42:58.140: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 11 00:43:03.173: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:43:04.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5667" for this suite.

• [SLOW TEST:6.380 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":161,"skipped":2679,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:43:04.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-9074
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 11 00:43:04.546: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 11 00:43:04.626: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:43:06.640: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:43:08.638: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:43:10.897: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:43:12.724: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:43:14.636: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:43:16.637: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:43:18.634: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 00:43:20.631: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:43:22.633: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:43:24.638: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:43:26.640: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:43:28.634: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:43:30.666: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:43:32.638: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 00:43:34.646: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 11 00:43:34.674: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 11 00:43:42.895: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.3:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9074 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 00:43:42.895: INFO: >>> kubeConfig: /root/.kube/config
I0211 00:43:42.998651       9 log.go:172] (0xc002cb8580) (0xc0028e3040) Create stream
I0211 00:43:42.998806       9 log.go:172] (0xc002cb8580) (0xc0028e3040) Stream added, broadcasting: 1
I0211 00:43:43.004450       9 log.go:172] (0xc002cb8580) Reply frame received for 1
I0211 00:43:43.004501       9 log.go:172] (0xc002cb8580) (0xc0018460a0) Create stream
I0211 00:43:43.004517       9 log.go:172] (0xc002cb8580) (0xc0018460a0) Stream added, broadcasting: 3
I0211 00:43:43.006166       9 log.go:172] (0xc002cb8580) Reply frame received for 3
I0211 00:43:43.006196       9 log.go:172] (0xc002cb8580) (0xc002251360) Create stream
I0211 00:43:43.006207       9 log.go:172] (0xc002cb8580) (0xc002251360) Stream added, broadcasting: 5
I0211 00:43:43.008623       9 log.go:172] (0xc002cb8580) Reply frame received for 5
I0211 00:43:43.150515       9 log.go:172] (0xc002cb8580) Data frame received for 3
I0211 00:43:43.150657       9 log.go:172] (0xc0018460a0) (3) Data frame handling
I0211 00:43:43.150675       9 log.go:172] (0xc0018460a0) (3) Data frame sent
I0211 00:43:43.265188       9 log.go:172] (0xc002cb8580) Data frame received for 1
I0211 00:43:43.265275       9 log.go:172] (0xc002cb8580) (0xc0018460a0) Stream removed, broadcasting: 3
I0211 00:43:43.265315       9 log.go:172] (0xc0028e3040) (1) Data frame handling
I0211 00:43:43.265346       9 log.go:172] (0xc002cb8580) (0xc002251360) Stream removed, broadcasting: 5
I0211 00:43:43.265391       9 log.go:172] (0xc0028e3040) (1) Data frame sent
I0211 00:43:43.265411       9 log.go:172] (0xc002cb8580) (0xc0028e3040) Stream removed, broadcasting: 1
I0211 00:43:43.265439       9 log.go:172] (0xc002cb8580) Go away received
I0211 00:43:43.265640       9 log.go:172] (0xc002cb8580) (0xc0028e3040) Stream removed, broadcasting: 1
I0211 00:43:43.265664       9 log.go:172] (0xc002cb8580) (0xc0018460a0) Stream removed, broadcasting: 3
I0211 00:43:43.265681       9 log.go:172] (0xc002cb8580) (0xc002251360) Stream removed, broadcasting: 5
Feb 11 00:43:43.265: INFO: Found all expected endpoints: [netserver-0]
Feb 11 00:43:43.893: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9074 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 00:43:43.893: INFO: >>> kubeConfig: /root/.kube/config
I0211 00:43:43.993810       9 log.go:172] (0xc002cb88f0) (0xc0028e3180) Create stream
I0211 00:43:43.994000       9 log.go:172] (0xc002cb88f0) (0xc0028e3180) Stream added, broadcasting: 1
I0211 00:43:43.998767       9 log.go:172] (0xc002cb88f0) Reply frame received for 1
I0211 00:43:43.998815       9 log.go:172] (0xc002cb88f0) (0xc0018461e0) Create stream
I0211 00:43:43.998833       9 log.go:172] (0xc002cb88f0) (0xc0018461e0) Stream added, broadcasting: 3
I0211 00:43:44.000825       9 log.go:172] (0xc002cb88f0) Reply frame received for 3
I0211 00:43:44.000865       9 log.go:172] (0xc002cb88f0) (0xc0014af4a0) Create stream
I0211 00:43:44.000905       9 log.go:172] (0xc002cb88f0) (0xc0014af4a0) Stream added, broadcasting: 5
I0211 00:43:44.002605       9 log.go:172] (0xc002cb88f0) Reply frame received for 5
I0211 00:43:44.122196       9 log.go:172] (0xc002cb88f0) Data frame received for 3
I0211 00:43:44.122348       9 log.go:172] (0xc0018461e0) (3) Data frame handling
I0211 00:43:44.122370       9 log.go:172] (0xc0018461e0) (3) Data frame sent
I0211 00:43:44.218238       9 log.go:172] (0xc002cb88f0) Data frame received for 1
I0211 00:43:44.218347       9 log.go:172] (0xc002cb88f0) (0xc0018461e0) Stream removed, broadcasting: 3
I0211 00:43:44.218381       9 log.go:172] (0xc0028e3180) (1) Data frame handling
I0211 00:43:44.218398       9 log.go:172] (0xc0028e3180) (1) Data frame sent
I0211 00:43:44.218421       9 log.go:172] (0xc002cb88f0) (0xc0014af4a0) Stream removed, broadcasting: 5
I0211 00:43:44.218479       9 log.go:172] (0xc002cb88f0) (0xc0028e3180) Stream removed, broadcasting: 1
I0211 00:43:44.218509       9 log.go:172] (0xc002cb88f0) Go away received
I0211 00:43:44.218726       9 log.go:172] (0xc002cb88f0) (0xc0028e3180) Stream removed, broadcasting: 1
I0211 00:43:44.218741       9 log.go:172] (0xc002cb88f0) (0xc0018461e0) Stream removed, broadcasting: 3
I0211 00:43:44.218745       9 log.go:172] (0xc002cb88f0) (0xc0014af4a0) Stream removed, broadcasting: 5
Feb 11 00:43:44.218: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:43:44.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9074" for this suite.

• [SLOW TEST:39.933 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":162,"skipped":2683,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:43:44.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 11 00:43:44.366: INFO: Waiting up to 5m0s for pod "pod-1a09d746-03b6-40b2-9dd0-2a8bd1799451" in namespace "emptydir-3599" to be "success or failure"
Feb 11 00:43:44.379: INFO: Pod "pod-1a09d746-03b6-40b2-9dd0-2a8bd1799451": Phase="Pending", Reason="", readiness=false. Elapsed: 12.628178ms
Feb 11 00:43:46.387: INFO: Pod "pod-1a09d746-03b6-40b2-9dd0-2a8bd1799451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020064179s
Feb 11 00:43:48.393: INFO: Pod "pod-1a09d746-03b6-40b2-9dd0-2a8bd1799451": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026451455s
Feb 11 00:43:50.895: INFO: Pod "pod-1a09d746-03b6-40b2-9dd0-2a8bd1799451": Phase="Pending", Reason="", readiness=false. Elapsed: 6.528044025s
Feb 11 00:43:52.903: INFO: Pod "pod-1a09d746-03b6-40b2-9dd0-2a8bd1799451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.535757716s
STEP: Saw pod success
Feb 11 00:43:52.903: INFO: Pod "pod-1a09d746-03b6-40b2-9dd0-2a8bd1799451" satisfied condition "success or failure"
Feb 11 00:43:52.907: INFO: Trying to get logs from node jerma-node pod pod-1a09d746-03b6-40b2-9dd0-2a8bd1799451 container test-container: 
STEP: delete the pod
Feb 11 00:43:54.161: INFO: Waiting for pod pod-1a09d746-03b6-40b2-9dd0-2a8bd1799451 to disappear
Feb 11 00:43:54.207: INFO: Pod pod-1a09d746-03b6-40b2-9dd0-2a8bd1799451 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:43:54.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3599" for this suite.

• [SLOW TEST:10.406 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":163,"skipped":2691,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:43:54.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 11 00:43:55.589: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Feb 11 00:43:58.345: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978638, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:44:00.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978638, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:44:02.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978638, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:44:04.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978638, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978635, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 11 00:44:07.386: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:44:07.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-55" for this suite.
STEP: Destroying namespace "webhook-55-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.413 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":164,"skipped":2694,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:44:08.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:44:17.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7269" for this suite.

• [SLOW TEST:9.258 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":165,"skipped":2720,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:44:17.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 11 00:44:18.106: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 11 00:44:20.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:44:22.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:44:24.193: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:44:26.128: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978658, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 11 00:44:29.175: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
Feb 11 00:44:31.368: INFO: Waiting for webhook configuration to be ready...
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:44:41.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5964" for this suite.
STEP: Destroying namespace "webhook-5964-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:24.635 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":166,"skipped":2732,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:44:41.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 11 00:44:43.361: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 11 00:44:45.386: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:44:49.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:44:49.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:44:51.395: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:44:53.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716978683, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 11 00:44:56.430: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb 11 00:44:56.588: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:44:56.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5719" for this suite.
STEP: Destroying namespace "webhook-5719-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:14.841 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":167,"skipped":2748,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:44:56.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 11 00:44:59.999: INFO: Pod name wrapped-volume-race-9ad71b08-722b-42db-a2a5-c0023147551e: Found 0 pods out of 5
Feb 11 00:45:05.014: INFO: Pod name wrapped-volume-race-9ad71b08-722b-42db-a2a5-c0023147551e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9ad71b08-722b-42db-a2a5-c0023147551e in namespace emptydir-wrapper-3519, will wait for the garbage collector to delete the pods
Feb 11 00:45:33.117: INFO: Deleting ReplicationController wrapped-volume-race-9ad71b08-722b-42db-a2a5-c0023147551e took: 11.309918ms
Feb 11 00:45:33.618: INFO: Terminating ReplicationController wrapped-volume-race-9ad71b08-722b-42db-a2a5-c0023147551e pods took: 500.950061ms
STEP: Creating RC which spawns configmap-volume pods
Feb 11 00:45:53.493: INFO: Pod name wrapped-volume-race-52001403-9b9d-4d73-9a02-cba0a680b2ff: Found 0 pods out of 5
Feb 11 00:45:58.509: INFO: Pod name wrapped-volume-race-52001403-9b9d-4d73-9a02-cba0a680b2ff: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-52001403-9b9d-4d73-9a02-cba0a680b2ff in namespace emptydir-wrapper-3519, will wait for the garbage collector to delete the pods
Feb 11 00:46:30.629: INFO: Deleting ReplicationController wrapped-volume-race-52001403-9b9d-4d73-9a02-cba0a680b2ff took: 16.655439ms
Feb 11 00:46:31.031: INFO: Terminating ReplicationController wrapped-volume-race-52001403-9b9d-4d73-9a02-cba0a680b2ff pods took: 401.285412ms
STEP: Creating RC which spawns configmap-volume pods
Feb 11 00:46:46.107: INFO: Pod name wrapped-volume-race-c3062a96-fff3-4ccf-9e19-a0bd2363df10: Found 0 pods out of 5
Feb 11 00:46:51.118: INFO: Pod name wrapped-volume-race-c3062a96-fff3-4ccf-9e19-a0bd2363df10: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c3062a96-fff3-4ccf-9e19-a0bd2363df10 in namespace emptydir-wrapper-3519, will wait for the garbage collector to delete the pods
Feb 11 00:47:19.266: INFO: Deleting ReplicationController wrapped-volume-race-c3062a96-fff3-4ccf-9e19-a0bd2363df10 took: 49.452703ms
Feb 11 00:47:19.667: INFO: Terminating ReplicationController wrapped-volume-race-c3062a96-fff3-4ccf-9e19-a0bd2363df10 pods took: 400.799072ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:47:33.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3519" for this suite.

• [SLOW TEST:157.003 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":168,"skipped":2753,"failed":0}
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:47:33.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-b84f
STEP: Creating a pod to test atomic-volume-subpath
Feb 11 00:47:33.985: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-b84f" in namespace "subpath-1651" to be "success or failure"
Feb 11 00:47:33.996: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.712205ms
Feb 11 00:47:36.005: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019691849s
Feb 11 00:47:38.489: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.503636573s
Feb 11 00:47:40.515: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.53058523s
Feb 11 00:47:42.526: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.540897139s
Feb 11 00:47:44.536: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Running", Reason="", readiness=true. Elapsed: 10.551091416s
Feb 11 00:47:46.544: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Running", Reason="", readiness=true. Elapsed: 12.559241708s
Feb 11 00:47:48.558: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Running", Reason="", readiness=true. Elapsed: 14.572985905s
Feb 11 00:47:50.568: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Running", Reason="", readiness=true. Elapsed: 16.582886843s
Feb 11 00:47:52.623: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Running", Reason="", readiness=true. Elapsed: 18.638444248s
Feb 11 00:47:54.633: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Running", Reason="", readiness=true. Elapsed: 20.647685728s
Feb 11 00:47:56.638: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Running", Reason="", readiness=true. Elapsed: 22.653273034s
Feb 11 00:47:58.649: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Running", Reason="", readiness=true. Elapsed: 24.663648393s
Feb 11 00:48:00.657: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Running", Reason="", readiness=true. Elapsed: 26.672564919s
Feb 11 00:48:02.663: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Running", Reason="", readiness=true. Elapsed: 28.677649928s
Feb 11 00:48:04.668: INFO: Pod "pod-subpath-test-configmap-b84f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.682996839s
STEP: Saw pod success
Feb 11 00:48:04.668: INFO: Pod "pod-subpath-test-configmap-b84f" satisfied condition "success or failure"
Feb 11 00:48:04.672: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-b84f container test-container-subpath-configmap-b84f: 
STEP: delete the pod
Feb 11 00:48:04.722: INFO: Waiting for pod pod-subpath-test-configmap-b84f to disappear
Feb 11 00:48:04.726: INFO: Pod pod-subpath-test-configmap-b84f no longer exists
STEP: Deleting pod pod-subpath-test-configmap-b84f
Feb 11 00:48:04.726: INFO: Deleting pod "pod-subpath-test-configmap-b84f" in namespace "subpath-1651"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:48:04.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1651" for this suite.

• [SLOW TEST:30.944 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":169,"skipped":2756,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:48:04.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466
STEP: creating a pod
Feb 11 00:48:04.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-7100 -- logs-generator --log-lines-total 100 --run-duration 20s'
Feb 11 00:48:07.273: INFO: stderr: ""
Feb 11 00:48:07.273: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Waiting for log generator to start.
Feb 11 00:48:07.274: INFO: Waiting up to 5m0s for 1 pod to be running and ready, or succeeded: [logs-generator]
Feb 11 00:48:07.274: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7100" to be "running and ready, or succeeded"
Feb 11 00:48:07.328: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 54.235115ms
Feb 11 00:48:09.808: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.534144203s
Feb 11 00:48:11.817: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.542461488s
Feb 11 00:48:13.829: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.554368739s
Feb 11 00:48:13.829: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Feb 11 00:48:13.829: INFO: Wanted 1 pod to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Feb 11 00:48:13.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7100'
Feb 11 00:48:14.001: INFO: stderr: ""
Feb 11 00:48:14.001: INFO: stdout: "I0211 00:48:12.667942       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/xx4 474\nI0211 00:48:12.868413       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/l6kh 354\nI0211 00:48:13.068375       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/p9d 572\nI0211 00:48:13.268364       1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/gvw4 263\nI0211 00:48:13.468399       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/c6dd 490\nI0211 00:48:13.668370       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/tgvb 383\nI0211 00:48:13.868363       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/shb 444\n"
STEP: limiting log lines
Feb 11 00:48:14.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7100 --tail=1'
Feb 11 00:48:14.257: INFO: stderr: ""
Feb 11 00:48:14.257: INFO: stdout: "I0211 00:48:14.068482       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/mgt 270\n"
Feb 11 00:48:14.258: INFO: got output "I0211 00:48:14.068482       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/mgt 270\n"
STEP: limiting log bytes
Feb 11 00:48:14.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7100 --limit-bytes=1'
Feb 11 00:48:14.380: INFO: stderr: ""
Feb 11 00:48:14.380: INFO: stdout: "I"
Feb 11 00:48:14.380: INFO: got output "I"
STEP: exposing timestamps
Feb 11 00:48:14.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7100 --tail=1 --timestamps'
Feb 11 00:48:14.479: INFO: stderr: ""
Feb 11 00:48:14.479: INFO: stdout: "2020-02-11T00:48:14.269298471Z I0211 00:48:14.268493       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/zd6k 478\n"
Feb 11 00:48:14.480: INFO: got output "2020-02-11T00:48:14.269298471Z I0211 00:48:14.268493       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/zd6k 478\n"
STEP: restricting to a time range
Feb 11 00:48:16.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7100 --since=1s'
Feb 11 00:48:17.175: INFO: stderr: ""
Feb 11 00:48:17.175: INFO: stdout: "I0211 00:48:16.268490       1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/x4vj 548\nI0211 00:48:16.468365       1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/7q46 257\nI0211 00:48:16.668378       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/zlk 521\nI0211 00:48:16.868379       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/4d6q 443\nI0211 00:48:17.068399       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/tdd 396\n"
Feb 11 00:48:17.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7100 --since=24h'
Feb 11 00:48:17.301: INFO: stderr: ""
Feb 11 00:48:17.301: INFO: stdout: "I0211 00:48:12.667942       1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/xx4 474\nI0211 00:48:12.868413       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/l6kh 354\nI0211 00:48:13.068375       1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/p9d 572\nI0211 00:48:13.268364       1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/gvw4 263\nI0211 00:48:13.468399       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/c6dd 490\nI0211 00:48:13.668370       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/tgvb 383\nI0211 00:48:13.868363       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/shb 444\nI0211 00:48:14.068482       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/mgt 270\nI0211 00:48:14.268493       1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/zd6k 478\nI0211 00:48:14.468436       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/lln 414\nI0211 00:48:14.669758       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/2gj 415\nI0211 00:48:14.868420       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/xtwh 220\nI0211 00:48:15.068353       1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/2w6 326\nI0211 00:48:15.268405       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/m47 219\nI0211 00:48:15.468493       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/k2r 492\nI0211 00:48:15.668409       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/kxv 558\nI0211 00:48:15.868327       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/684k 574\nI0211 00:48:16.068333       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/wq84 336\nI0211 00:48:16.268490       1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/x4vj 548\nI0211 00:48:16.468365       1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/7q46 257\nI0211 00:48:16.668378       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/zlk 521\nI0211 00:48:16.868379       1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/4d6q 443\nI0211 00:48:17.068399       1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/tdd 396\nI0211 00:48:17.268157       1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/f2h 373\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472
Feb 11 00:48:17.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7100'
Feb 11 00:48:32.358: INFO: stderr: ""
Feb 11 00:48:32.358: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:48:32.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7100" for this suite.

• [SLOW TEST:27.669 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":280,"completed":170,"skipped":2778,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:48:32.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-a3ad63a4-5d31-48c3-9fc0-cb1084055d28
STEP: Creating configMap with name cm-test-opt-upd-33ba084c-37ce-4686-b8ff-1afdf21a6a08
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a3ad63a4-5d31-48c3-9fc0-cb1084055d28
STEP: Updating configmap cm-test-opt-upd-33ba084c-37ce-4686-b8ff-1afdf21a6a08
STEP: Creating configMap with name cm-test-opt-create-0b951692-fbfe-469b-9013-09d283c46be7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:48:48.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6434" for this suite.

• [SLOW TEST:16.377 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":171,"skipped":2788,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:48:48.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 11 00:48:48.881: INFO: Waiting up to 5m0s for pod "pod-7302b855-d7d5-4881-9561-48afb63ec841" in namespace "emptydir-7911" to be "success or failure"
Feb 11 00:48:48.939: INFO: Pod "pod-7302b855-d7d5-4881-9561-48afb63ec841": Phase="Pending", Reason="", readiness=false. Elapsed: 57.611782ms
Feb 11 00:48:51.892: INFO: Pod "pod-7302b855-d7d5-4881-9561-48afb63ec841": Phase="Pending", Reason="", readiness=false. Elapsed: 3.010482759s
Feb 11 00:48:53.987: INFO: Pod "pod-7302b855-d7d5-4881-9561-48afb63ec841": Phase="Pending", Reason="", readiness=false. Elapsed: 5.105521045s
Feb 11 00:48:55.995: INFO: Pod "pod-7302b855-d7d5-4881-9561-48afb63ec841": Phase="Pending", Reason="", readiness=false. Elapsed: 7.113278118s
Feb 11 00:48:58.003: INFO: Pod "pod-7302b855-d7d5-4881-9561-48afb63ec841": Phase="Pending", Reason="", readiness=false. Elapsed: 9.121510894s
Feb 11 00:49:00.011: INFO: Pod "pod-7302b855-d7d5-4881-9561-48afb63ec841": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.129566124s
STEP: Saw pod success
Feb 11 00:49:00.012: INFO: Pod "pod-7302b855-d7d5-4881-9561-48afb63ec841" satisfied condition "success or failure"
Feb 11 00:49:00.015: INFO: Trying to get logs from node jerma-node pod pod-7302b855-d7d5-4881-9561-48afb63ec841 container test-container: 
STEP: delete the pod
Feb 11 00:49:00.076: INFO: Waiting for pod pod-7302b855-d7d5-4881-9561-48afb63ec841 to disappear
Feb 11 00:49:00.136: INFO: Pod pod-7302b855-d7d5-4881-9561-48afb63ec841 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:49:00.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7911" for this suite.

• [SLOW TEST:11.362 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":172,"skipped":2796,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:49:00.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 00:49:00.408: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-a63a7a6b-352c-4e83-a65f-62f56fd8f0e0" in namespace "security-context-test-1932" to be "success or failure"
Feb 11 00:49:00.455: INFO: Pod "alpine-nnp-false-a63a7a6b-352c-4e83-a65f-62f56fd8f0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 46.547243ms
Feb 11 00:49:02.461: INFO: Pod "alpine-nnp-false-a63a7a6b-352c-4e83-a65f-62f56fd8f0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052866366s
Feb 11 00:49:04.471: INFO: Pod "alpine-nnp-false-a63a7a6b-352c-4e83-a65f-62f56fd8f0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063022827s
Feb 11 00:49:06.481: INFO: Pod "alpine-nnp-false-a63a7a6b-352c-4e83-a65f-62f56fd8f0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073162819s
Feb 11 00:49:08.543: INFO: Pod "alpine-nnp-false-a63a7a6b-352c-4e83-a65f-62f56fd8f0e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134310068s
Feb 11 00:49:08.543: INFO: Pod "alpine-nnp-false-a63a7a6b-352c-4e83-a65f-62f56fd8f0e0" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:49:08.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1932" for this suite.

• [SLOW TEST:8.440 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":173,"skipped":2829,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:49:08.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 00:49:08.774: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:49:14.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8684" for this suite.

• [SLOW TEST:5.965 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":280,"completed":174,"skipped":2831,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:49:14.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 00:49:14.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8007'
Feb 11 00:49:15.231: INFO: stderr: ""
Feb 11 00:49:15.231: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Feb 11 00:49:15.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8007'
Feb 11 00:49:15.611: INFO: stderr: ""
Feb 11 00:49:15.611: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 11 00:49:16.625: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 11 00:49:16.625: INFO: Found 0 / 1
Feb 11 00:49:17.622: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 11 00:49:17.622: INFO: Found 0 / 1
Feb 11 00:49:18.621: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 11 00:49:18.621: INFO: Found 0 / 1
Feb 11 00:49:19.619: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 11 00:49:19.619: INFO: Found 0 / 1
Feb 11 00:49:20.620: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 11 00:49:20.620: INFO: Found 0 / 1
Feb 11 00:49:21.618: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 11 00:49:21.618: INFO: Found 1 / 1
Feb 11 00:49:21.618: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 11 00:49:21.623: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 11 00:49:21.623: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 11 00:49:21.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-wgjcj --namespace=kubectl-8007'
Feb 11 00:49:21.807: INFO: stderr: ""
Feb 11 00:49:21.807: INFO: stdout: "Name:         agnhost-master-wgjcj\nNamespace:    kubectl-8007\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Tue, 11 Feb 2020 00:49:15 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://e6caa7188e634eb813143b5d3e90eb9dbca2148b17ae79614cf2eae1b2165a39\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 11 Feb 2020 00:49:20 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pxx65 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-pxx65:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-pxx65\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned kubectl-8007/agnhost-master-wgjcj to jerma-node\n  Normal  Pulled     3s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-node  Started container agnhost-master\n"
Feb 11 00:49:21.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8007'
Feb 11 00:49:21.971: INFO: stderr: ""
Feb 11 00:49:21.971: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-8007\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  6s    replication-controller  Created pod: agnhost-master-wgjcj\n"
Feb 11 00:49:21.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8007'
Feb 11 00:49:22.143: INFO: stderr: ""
Feb 11 00:49:22.143: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-8007\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.176.107\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Feb 11 00:49:22.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Feb 11 00:49:22.368: INFO: stderr: ""
Feb 11 00:49:22.368: INFO: stdout: "Name:               jerma-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     <unset>\n  RenewTime:       Tue, 11 Feb 2020 00:49:12 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Tue, 11 Feb 2020 00:44:40 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 11 Feb 2020 00:44:40 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 11 Feb 2020 00:44:40 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 11 Feb 2020 00:44:40 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         37d\n  kubectl-8007                agnhost-master-wgjcj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Feb 11 00:49:22.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8007'
Feb 11 00:49:22.522: INFO: stderr: ""
Feb 11 00:49:22.522: INFO: stdout: "Name:         kubectl-8007\nLabels:       e2e-framework=kubectl\n              e2e-run=926c12fe-a8e8-47b4-bf1c-6765a596be64\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:49:22.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8007" for this suite.

• [SLOW TEST:7.970 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":280,"completed":175,"skipped":2848,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:49:22.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:49:22.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6822" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":176,"skipped":2866,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:49:22.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 11 00:49:22.830: INFO: Waiting up to 5m0s for pod "pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8" in namespace "emptydir-7345" to be "success or failure"
Feb 11 00:49:22.856: INFO: Pod "pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.513457ms
Feb 11 00:49:24.870: INFO: Pod "pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04026721s
Feb 11 00:49:26.882: INFO: Pod "pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052479756s
Feb 11 00:49:28.890: INFO: Pod "pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059896018s
Feb 11 00:49:30.902: INFO: Pod "pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071807081s
Feb 11 00:49:32.907: INFO: Pod "pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077378294s
STEP: Saw pod success
Feb 11 00:49:32.907: INFO: Pod "pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8" satisfied condition "success or failure"
Feb 11 00:49:32.909: INFO: Trying to get logs from node jerma-node pod pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8 container test-container: 
STEP: delete the pod
Feb 11 00:49:32.966: INFO: Waiting for pod pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8 to disappear
Feb 11 00:49:32.972: INFO: Pod pod-37b41499-9b8c-4d5c-ac38-5a2cd932cbf8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:49:32.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7345" for this suite.

• [SLOW TEST:10.308 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":177,"skipped":2873,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:49:32.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 11 00:49:42.481: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:49:43.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7219" for this suite.

• [SLOW TEST:10.556 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":178,"skipped":2902,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:49:43.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 11 00:50:04.560: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 00:50:04.619: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 00:50:06.619: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 00:50:06.629: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 00:50:08.619: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 00:50:08.623: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 00:50:10.619: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 00:50:10.627: INFO: Pod pod-with-prestop-http-hook still exists
Feb 11 00:50:12.619: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 11 00:50:12.628: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:50:12.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5321" for this suite.

• [SLOW TEST:29.126 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":179,"skipped":2962,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:50:12.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 11 00:50:12.821: INFO: Waiting up to 5m0s for pod "downward-api-883f068e-08b1-4b3d-8cc7-9aedf2fcf924" in namespace "downward-api-4747" to be "success or failure"
Feb 11 00:50:12.832: INFO: Pod "downward-api-883f068e-08b1-4b3d-8cc7-9aedf2fcf924": Phase="Pending", Reason="", readiness=false. Elapsed: 10.005658ms
Feb 11 00:50:14.839: INFO: Pod "downward-api-883f068e-08b1-4b3d-8cc7-9aedf2fcf924": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017432773s
Feb 11 00:50:16.850: INFO: Pod "downward-api-883f068e-08b1-4b3d-8cc7-9aedf2fcf924": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02785102s
Feb 11 00:50:18.862: INFO: Pod "downward-api-883f068e-08b1-4b3d-8cc7-9aedf2fcf924": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040666531s
Feb 11 00:50:20.871: INFO: Pod "downward-api-883f068e-08b1-4b3d-8cc7-9aedf2fcf924": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048715183s
STEP: Saw pod success
Feb 11 00:50:20.871: INFO: Pod "downward-api-883f068e-08b1-4b3d-8cc7-9aedf2fcf924" satisfied condition "success or failure"
Feb 11 00:50:20.873: INFO: Trying to get logs from node jerma-node pod downward-api-883f068e-08b1-4b3d-8cc7-9aedf2fcf924 container dapi-container: 
STEP: delete the pod
Feb 11 00:50:20.920: INFO: Waiting for pod downward-api-883f068e-08b1-4b3d-8cc7-9aedf2fcf924 to disappear
Feb 11 00:50:20.927: INFO: Pod downward-api-883f068e-08b1-4b3d-8cc7-9aedf2fcf924 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:50:20.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4747" for this suite.

• [SLOW TEST:8.331 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":180,"skipped":2974,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:50:21.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:50:32.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3050" for this suite.

• [SLOW TEST:11.187 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":181,"skipped":2988,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:50:32.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with configMap that has name projected-configmap-test-upd-6676e131-f554-479e-a360-e0cc2b7adb44
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-6676e131-f554-479e-a360-e0cc2b7adb44
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:50:44.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3378" for this suite.

• [SLOW TEST:12.238 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":182,"skipped":2995,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:50:44.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-00455a50-8f59-4967-a75f-14deccba241e in namespace container-probe-8187
Feb 11 00:50:52.583: INFO: Started pod busybox-00455a50-8f59-4967-a75f-14deccba241e in namespace container-probe-8187
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 00:50:52.587: INFO: Initial restart count of pod busybox-00455a50-8f59-4967-a75f-14deccba241e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:54:52.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8187" for this suite.

• [SLOW TEST:248.511 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":183,"skipped":2999,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:54:52.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 11 00:54:53.107: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbef51b5-d691-44df-9956-84ebca4815a2" in namespace "downward-api-6812" to be "success or failure"
Feb 11 00:54:53.134: INFO: Pod "downwardapi-volume-bbef51b5-d691-44df-9956-84ebca4815a2": Phase="Pending", Reason="", readiness=false. Elapsed: 27.510554ms
Feb 11 00:54:55.140: INFO: Pod "downwardapi-volume-bbef51b5-d691-44df-9956-84ebca4815a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033332551s
Feb 11 00:54:57.169: INFO: Pod "downwardapi-volume-bbef51b5-d691-44df-9956-84ebca4815a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0624203s
Feb 11 00:54:59.175: INFO: Pod "downwardapi-volume-bbef51b5-d691-44df-9956-84ebca4815a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068245867s
Feb 11 00:55:01.182: INFO: Pod "downwardapi-volume-bbef51b5-d691-44df-9956-84ebca4815a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075013519s
STEP: Saw pod success
Feb 11 00:55:01.182: INFO: Pod "downwardapi-volume-bbef51b5-d691-44df-9956-84ebca4815a2" satisfied condition "success or failure"
Feb 11 00:55:01.195: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-bbef51b5-d691-44df-9956-84ebca4815a2 container client-container: 
STEP: delete the pod
Feb 11 00:55:01.267: INFO: Waiting for pod downwardapi-volume-bbef51b5-d691-44df-9956-84ebca4815a2 to disappear
Feb 11 00:55:01.316: INFO: Pod downwardapi-volume-bbef51b5-d691-44df-9956-84ebca4815a2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:55:01.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6812" for this suite.

• [SLOW TEST:8.389 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":184,"skipped":3008,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:55:01.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 11 00:55:02.088: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 11 00:55:04.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:55:06.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:55:08.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 00:55:10.104: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979302, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 11 00:55:13.152: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 00:55:13.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3609-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:55:14.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7807" for this suite.
STEP: Destroying namespace "webhook-7807-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.458 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":185,"skipped":3031,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:55:14.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating all guestbook components
Feb 11 00:55:14.934: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Feb 11 00:55:14.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1453'
Feb 11 00:55:15.467: INFO: stderr: ""
Feb 11 00:55:15.467: INFO: stdout: "service/agnhost-slave created\n"
Feb 11 00:55:15.468: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Feb 11 00:55:15.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1453'
Feb 11 00:55:16.072: INFO: stderr: ""
Feb 11 00:55:16.073: INFO: stdout: "service/agnhost-master created\n"
Feb 11 00:55:16.074: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 11 00:55:16.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1453'
Feb 11 00:55:16.627: INFO: stderr: ""
Feb 11 00:55:16.627: INFO: stdout: "service/frontend created\n"
Feb 11 00:55:16.628: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb 11 00:55:16.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1453'
Feb 11 00:55:17.142: INFO: stderr: ""
Feb 11 00:55:17.142: INFO: stdout: "deployment.apps/frontend created\n"
Feb 11 00:55:17.143: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 11 00:55:17.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1453'
Feb 11 00:55:17.625: INFO: stderr: ""
Feb 11 00:55:17.625: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb 11 00:55:17.626: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 11 00:55:17.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1453'
Feb 11 00:55:18.547: INFO: stderr: ""
Feb 11 00:55:18.547: INFO: stdout: "deployment.apps/agnhost-slave created\n"
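
All six objects are now created; the framework next waits only for the frontend pods to reach Running before it starts probing. Reproducing that wait by hand, one could watch all three Deployments instead, e.g.:

# Block until each Deployment reports its replicas updated and available.
for d in frontend agnhost-master agnhost-slave; do
  kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 \
    rollout status deployment/"$d" --timeout=120s
done
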
STEP: validating guestbook app
Feb 11 00:55:18.547: INFO: Waiting for all frontend pods to be Running.
Feb 11 00:55:38.600: INFO: Waiting for frontend to serve content.
Feb 11 00:55:38.627: INFO: Trying to add a new entry to the guestbook.
Feb 11 00:55:38.651: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:55:43.682: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:55:48.697: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:55:53.729: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:55:58.756: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:03.779: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:08.800: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:13.819: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:18.844: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:23.910: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:28.930: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:33.963: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:38.988: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:44.026: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:49.043: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:54.064: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:56:59.081: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:04.099: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:09.123: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:14.139: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:19.163: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:24.183: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:29.202: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:34.225: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:39.249: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:44.277: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:49.310: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:57:55.122: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:58:00.141: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:58:05.175: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:58:10.201: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:58:15.219: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:58:20.236: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:58:25.286: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:58:30.316: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:58:35.338: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 11 00:58:40.339: FAIL: Cannot add new entry in 180 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x551f740, 0xc0039b4580, 0xc0022e5660, 0xc)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:420 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0018e4000)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc0018e4000)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc0018e4000, 0x4c9f938)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
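
The failure pattern above is uniform: for the full 180-second window, every write the frontend tries to propagate to the slave at 10.32.0.1:6379 is refused, even though the pod dump later in this log shows both agnhost-slave pods Running and Ready. A reasonable first debugging step would be to check whether 10.32.0.1 is actually one of the agnhost-slave endpoints, e.g.:

# Which pod IPs back the slave Service?
kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 get endpoints agnhost-slave -o wide
# Compare against the slave pods' own IPs and nodes.
kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 get pods -l role=slave -o wide
# And confirm the Service's selector and port wiring.
kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 describe service agnhost-slave
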
STEP: using delete to clean up resources
Feb 11 00:58:40.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1453'
Feb 11 00:58:42.981: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 00:58:42.981: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 00:58:42.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1453'
Feb 11 00:58:43.210: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 00:58:43.210: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 00:58:43.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1453'
Feb 11 00:58:43.384: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 00:58:43.385: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 00:58:43.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1453'
Feb 11 00:58:43.573: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 00:58:43.573: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 00:58:43.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1453'
Feb 11 00:58:43.801: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 00:58:43.802: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 11 00:58:43.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1453'
Feb 11 00:58:44.244: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 00:58:44.244: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
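
The six per-object deletions above (each invocation re-reads the original manifest from stdin via -f -) are equivalent to a single delete by resource name:

kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 \
  delete --grace-period=0 --force \
  service/agnhost-slave service/agnhost-master service/frontend \
  deployment.apps/frontend deployment.apps/agnhost-master deployment.apps/agnhost-slave
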
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "kubectl-1453".
STEP: Found 37 events.
Feb 11 00:58:44.254: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-ngr65: {default-scheduler } Scheduled: Successfully assigned kubectl-1453/agnhost-master-74c46fb7d4-ngr65 to jerma-node
Feb 11 00:58:44.254: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-2zgtv: {default-scheduler } Scheduled: Successfully assigned kubectl-1453/agnhost-slave-774cfc759f-2zgtv to jerma-node
Feb 11 00:58:44.254: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-f24gb: {default-scheduler } Scheduled: Successfully assigned kubectl-1453/agnhost-slave-774cfc759f-f24gb to jerma-server-mvvl6gufaqub
Feb 11 00:58:44.255: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-4hclm: {default-scheduler } Scheduled: Successfully assigned kubectl-1453/frontend-6c5f89d5d4-4hclm to jerma-node
Feb 11 00:58:44.255: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-64pxz: {default-scheduler } Scheduled: Successfully assigned kubectl-1453/frontend-6c5f89d5d4-64pxz to jerma-node
Feb 11 00:58:44.255: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-d98wr: {default-scheduler } Scheduled: Successfully assigned kubectl-1453/frontend-6c5f89d5d4-d98wr to jerma-server-mvvl6gufaqub
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:17 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:17 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-ngr65
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:17 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:17 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-4hclm
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:17 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-64pxz
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:17 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-d98wr
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:19 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:19 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-f24gb
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:19 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-2zgtv
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:26 +0000 UTC - event for agnhost-slave-774cfc759f-2zgtv: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:26 +0000 UTC - event for frontend-6c5f89d5d4-d98wr: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:27 +0000 UTC - event for frontend-6c5f89d5d4-4hclm: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:27 +0000 UTC - event for frontend-6c5f89d5d4-64pxz: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:28 +0000 UTC - event for agnhost-slave-774cfc759f-f24gb: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:30 +0000 UTC - event for agnhost-master-74c46fb7d4-ngr65: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:32 +0000 UTC - event for agnhost-slave-774cfc759f-f24gb: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:32 +0000 UTC - event for frontend-6c5f89d5d4-d98wr: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:33 +0000 UTC - event for agnhost-slave-774cfc759f-f24gb: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:33 +0000 UTC - event for frontend-6c5f89d5d4-d98wr: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:34 +0000 UTC - event for agnhost-master-74c46fb7d4-ngr65: {kubelet jerma-node} Created: Created container master
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:34 +0000 UTC - event for agnhost-slave-774cfc759f-2zgtv: {kubelet jerma-node} Created: Created container slave
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:34 +0000 UTC - event for frontend-6c5f89d5d4-4hclm: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:34 +0000 UTC - event for frontend-6c5f89d5d4-64pxz: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:35 +0000 UTC - event for agnhost-master-74c46fb7d4-ngr65: {kubelet jerma-node} Started: Started container master
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:35 +0000 UTC - event for agnhost-slave-774cfc759f-2zgtv: {kubelet jerma-node} Started: Started container slave
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:35 +0000 UTC - event for frontend-6c5f89d5d4-4hclm: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:55:35 +0000 UTC - event for frontend-6c5f89d5d4-64pxz: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:58:43 +0000 UTC - event for agnhost-master-74c46fb7d4-ngr65: {kubelet jerma-node} Killing: Stopping container master
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:58:43 +0000 UTC - event for frontend-6c5f89d5d4-4hclm: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:58:43 +0000 UTC - event for frontend-6c5f89d5d4-64pxz: {kubelet jerma-node} Killing: Stopping container guestbook-frontend
Feb 11 00:58:44.255: INFO: At 2020-02-11 00:58:43 +0000 UTC - event for frontend-6c5f89d5d4-d98wr: {kubelet jerma-server-mvvl6gufaqub} Killing: Stopping container guestbook-frontend
Feb 11 00:58:44.309: INFO: POD                              NODE                       PHASE    GRACE  CONDITIONS
Feb 11 00:58:44.309: INFO: agnhost-master-74c46fb7d4-ngr65  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:17 +0000 UTC  }]
Feb 11 00:58:44.310: INFO: agnhost-slave-774cfc759f-2zgtv   jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:19 +0000 UTC  }]
Feb 11 00:58:44.310: INFO: agnhost-slave-774cfc759f-f24gb   jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:19 +0000 UTC  }]
Feb 11 00:58:44.310: INFO: frontend-6c5f89d5d4-4hclm        jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:18 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:17 +0000 UTC  }]
Feb 11 00:58:44.310: INFO: frontend-6c5f89d5d4-64pxz        jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:17 +0000 UTC  }]
Feb 11 00:58:44.310: INFO: frontend-6c5f89d5d4-d98wr        jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 00:55:17 +0000 UTC  }]
Feb 11 00:58:44.310: INFO: 
Feb 11 00:58:44.365: INFO: 
Logging node info for node jerma-node
Feb 11 00:58:44.375: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 7647281 0 2020-01-04 11:59:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-11 00:54:41 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-11 00:54:41 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-11 00:54:41 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-11 00:54:41 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 11 00:58:44.376: INFO: 
Logging kubelet events for node jerma-node
Feb 11 00:58:44.381: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Feb 11 00:58:44.408: INFO: agnhost-master-74c46fb7d4-ngr65 started at 2020-02-11 00:55:18 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:44.408: INFO: 	Container master ready: true, restart count 0
Feb 11 00:58:44.408: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:44.408: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 11 00:58:44.408: INFO: frontend-6c5f89d5d4-64pxz started at 2020-02-11 00:55:17 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:44.408: INFO: 	Container guestbook-frontend ready: true, restart count 0
Feb 11 00:58:44.408: INFO: agnhost-slave-774cfc759f-2zgtv started at 2020-02-11 00:55:20 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:44.408: INFO: 	Container slave ready: true, restart count 0
Feb 11 00:58:44.408: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Feb 11 00:58:44.408: INFO: 	Container weave ready: true, restart count 1
Feb 11 00:58:44.408: INFO: 	Container weave-npc ready: true, restart count 0
Feb 11 00:58:44.408: INFO: frontend-6c5f89d5d4-4hclm started at 2020-02-11 00:55:18 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:44.408: INFO: 	Container guestbook-frontend ready: true, restart count 0
W0211 00:58:45.068851       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 00:58:45.166: INFO: 
Latency metrics for node jerma-node
Feb 11 00:58:45.166: INFO: 
Logging node info for node jerma-server-mvvl6gufaqub
Feb 11 00:58:45.341: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub   /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 7647754 0 2020-01-04 11:47:40 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-11 00:56:39 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-11 00:56:39 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-11 00:56:39 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-11 00:56:39 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 11 00:58:45.342: INFO: 
Logging kubelet events for node jerma-server-mvvl6gufaqub
Feb 11 00:58:45.347: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Feb 11 00:58:45.381: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:45.381: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 11 00:58:45.381: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:45.381: INFO: 	Container etcd ready: true, restart count 1
Feb 11 00:58:45.381: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:45.381: INFO: 	Container coredns ready: true, restart count 0
Feb 11 00:58:45.381: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:45.381: INFO: 	Container coredns ready: true, restart count 0
Feb 11 00:58:45.381: INFO: agnhost-slave-774cfc759f-f24gb started at 2020-02-11 00:55:19 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:45.381: INFO: 	Container slave ready: true, restart count 0
Feb 11 00:58:45.381: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:45.381: INFO: 	Container kube-controller-manager ready: true, restart count 5
Feb 11 00:58:45.381: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:45.381: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 11 00:58:45.381: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Feb 11 00:58:45.381: INFO: 	Container weave ready: true, restart count 0
Feb 11 00:58:45.381: INFO: 	Container weave-npc ready: true, restart count 0
Feb 11 00:58:45.381: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:45.381: INFO: 	Container kube-scheduler ready: true, restart count 7
Feb 11 00:58:45.381: INFO: frontend-6c5f89d5d4-d98wr started at 2020-02-11 00:55:17 +0000 UTC (0+1 container statuses recorded)
Feb 11 00:58:45.381: INFO: 	Container guestbook-frontend ready: true, restart count 0
W0211 00:58:45.388436       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 00:58:45.464: INFO: 
Latency metrics for node jerma-server-mvvl6gufaqub
Feb 11 00:58:45.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1453" for this suite.

• Failure [210.688 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
    should create and stop a working application  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685

    Feb 11 00:58:40.339: Cannot add new entry in 180 seconds.

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":280,"completed":185,"skipped":3056,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:58:45.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7720.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7720.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7720.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7720.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7720.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7720.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
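
The $$ doubling in both probe commands is the Pod API's escape for a literal $ (container command strings otherwise expand $(VAR) references). Unescaped and reformatted, each probe loop reads:

for i in `seq 1 600`; do
  # Hostname lookups via /etc/hosts and NSS for the fully-qualified and bare pod name.
  test -n "$(getent hosts dns-querier-1.dns-test-service.dns-7720.svc.cluster.local)" \
    && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7720.svc.cluster.local
  test -n "$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1
  # Derive this pod's A-record name from its IP, then resolve it over UDP and TCP.
  podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-7720.pod.cluster.local"}')
  check="$(dig +notcp +noall +answer +search ${podARec} A)" && test -n "$check" && echo OK > /results/wheezy_udp@PodARecord
  check="$(dig +tcp +noall +answer +search ${podARec} A)" && test -n "$check" && echo OK > /results/wheezy_tcp@PodARecord
  sleep 1
done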

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 00:59:01.985: INFO: DNS probes using dns-7720/dns-test-3e1bedeb-f4e9-4f78-9d45-0198858144b1 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:59:02.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7720" for this suite.

• [SLOW TEST:16.633 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":186,"skipped":3064,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:59:02.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
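
The quota manifest itself is not echoed in the log. A minimal ResourceQuota exercising the same secret-count accounting (the name test-quota is hypothetical, not taken from the log) would be:

kubectl --kubeconfig=/root/.kube/config create -f - --namespace=resourcequota-2126 <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota      # hypothetical name
spec:
  hard:
    secrets: "5"        # object-count quota: at most 5 Secrets in the namespace
EOF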
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:59:20.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2126" for this suite.

• [SLOW TEST:18.448 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":280,"completed":187,"skipped":3105,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:59:20.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 11 00:59:29.277: INFO: Successfully updated pod "annotationupdate328a5551-5021-4d3c-9d38-c53944d7fb11"
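
The log does not echo the pod spec, but the mechanism under test is a downwardAPI volume projecting metadata.annotations into a file that the kubelet rewrites when the annotations change. A minimal sketch (the pod name, annotation, and busybox image are illustrative assumptions):

kubectl --kubeconfig=/root/.kube/config create -f - --namespace=downward-api-9663 <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example   # hypothetical
  annotations:
    builder: alice                 # hypothetical; change it later and watch the file update
spec:
  containers:
  - name: client
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF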
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:59:31.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9663" for this suite.

• [SLOW TEST:10.789 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":188,"skipped":3110,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:59:31.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 11 00:59:47.563: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 00:59:47.582: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 00:59:49.582: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 00:59:49.587: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 00:59:51.582: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 00:59:51.776: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 11 00:59:53.582: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 11 00:59:53.590: INFO: Pod pod-with-poststart-exec-hook no longer exists
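
The hook pod's spec is not shown in the log. A minimal pod with a postStart exec hook (the pod name is taken from the log; the image and commands are illustrative assumptions):

kubectl --kubeconfig=/root/.kube/config create -f - --namespace=container-lifecycle-hook-1731 <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: hooked
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs in the container right after it starts; the container is not
          # reported Running until the handler completes.
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]
EOF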
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 00:59:53.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1731" for this suite.

• [SLOW TEST:22.247 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":189,"skipped":3122,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 00:59:53.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the initial replication controller
Feb 11 00:59:54.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8624'
Feb 11 00:59:54.508: INFO: stderr: ""
Feb 11 00:59:54.508: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 00:59:54.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8624'
Feb 11 00:59:54.745: INFO: stderr: ""
Feb 11 00:59:54.745: INFO: stdout: "update-demo-nautilus-ccdf8 update-demo-nautilus-g4t48 "
Feb 11 00:59:54.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccdf8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8624'
Feb 11 00:59:54.868: INFO: stderr: ""
Feb 11 00:59:54.868: INFO: stdout: ""
Feb 11 00:59:54.868: INFO: update-demo-nautilus-ccdf8 is created but not running
Feb 11 00:59:59.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8624'
Feb 11 01:00:01.529: INFO: stderr: ""
Feb 11 01:00:01.529: INFO: stdout: "update-demo-nautilus-ccdf8 update-demo-nautilus-g4t48 "
Feb 11 01:00:01.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccdf8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8624'
Feb 11 01:00:01.882: INFO: stderr: ""
Feb 11 01:00:01.882: INFO: stdout: ""
Feb 11 01:00:01.882: INFO: update-demo-nautilus-ccdf8 is created but not running
Feb 11 01:00:06.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8624'
Feb 11 01:00:07.102: INFO: stderr: ""
Feb 11 01:00:07.103: INFO: stdout: "update-demo-nautilus-ccdf8 update-demo-nautilus-g4t48 "
Feb 11 01:00:07.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccdf8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8624'
Feb 11 01:00:07.185: INFO: stderr: ""
Feb 11 01:00:07.185: INFO: stdout: "true"
Feb 11 01:00:07.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ccdf8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8624'
Feb 11 01:00:07.327: INFO: stderr: ""
Feb 11 01:00:07.328: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 01:00:07.328: INFO: validating pod update-demo-nautilus-ccdf8
Feb 11 01:00:07.354: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 01:00:07.354: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 01:00:07.354: INFO: update-demo-nautilus-ccdf8 is verified up and running
Feb 11 01:00:07.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4t48 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8624'
Feb 11 01:00:07.471: INFO: stderr: ""
Feb 11 01:00:07.471: INFO: stdout: "true"
Feb 11 01:00:07.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g4t48 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8624'
Feb 11 01:00:07.578: INFO: stderr: ""
Feb 11 01:00:07.578: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 01:00:07.578: INFO: validating pod update-demo-nautilus-g4t48
Feb 11 01:00:07.598: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 01:00:07.598: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 11 01:00:07.598: INFO: update-demo-nautilus-g4t48 is verified up and running
STEP: rolling-update to new replication controller
Feb 11 01:00:07.602: INFO: scanned /root for discovery docs: 
Feb 11 01:00:07.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8624'
Feb 11 01:00:37.657: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 11 01:00:37.657: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 01:00:37.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8624'
Feb 11 01:00:37.843: INFO: stderr: ""
Feb 11 01:00:37.843: INFO: stdout: "update-demo-kitten-dhl59 update-demo-kitten-gb79j "
Feb 11 01:00:37.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dhl59 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8624'
Feb 11 01:00:37.966: INFO: stderr: ""
Feb 11 01:00:37.966: INFO: stdout: "true"
Feb 11 01:00:37.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dhl59 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8624'
Feb 11 01:00:38.084: INFO: stderr: ""
Feb 11 01:00:38.084: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 11 01:00:38.085: INFO: validating pod update-demo-kitten-dhl59
Feb 11 01:00:38.092: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 11 01:00:38.093: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 11 01:00:38.093: INFO: update-demo-kitten-dhl59 is verified up and running
Feb 11 01:00:38.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gb79j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8624'
Feb 11 01:00:38.185: INFO: stderr: ""
Feb 11 01:00:38.185: INFO: stdout: "true"
Feb 11 01:00:38.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gb79j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8624'
Feb 11 01:00:38.301: INFO: stderr: ""
Feb 11 01:00:38.301: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 11 01:00:38.301: INFO: validating pod update-demo-kitten-gb79j
Feb 11 01:00:38.321: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 11 01:00:38.321: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 11 01:00:38.321: INFO: update-demo-kitten-gb79j is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:00:38.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8624" for this suite.

• [SLOW TEST:44.726 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":280,"completed":190,"skipped":3128,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:00:38.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 11 01:00:49.359: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:00:49.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1654" for this suite.

• [SLOW TEST:11.112 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":191,"skipped":3136,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:00:49.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-972bf6bb-b0c2-4eb2-a540-d2832c03368e
STEP: Creating a pod to test consume secrets
Feb 11 01:00:49.741: INFO: Waiting up to 5m0s for pod "pod-secrets-0afb24a8-0649-48da-ad98-07a04c39861c" in namespace "secrets-9446" to be "success or failure"
Feb 11 01:00:49.797: INFO: Pod "pod-secrets-0afb24a8-0649-48da-ad98-07a04c39861c": Phase="Pending", Reason="", readiness=false. Elapsed: 56.583403ms
Feb 11 01:00:51.818: INFO: Pod "pod-secrets-0afb24a8-0649-48da-ad98-07a04c39861c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07708593s
Feb 11 01:00:53.828: INFO: Pod "pod-secrets-0afb24a8-0649-48da-ad98-07a04c39861c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086903173s
Feb 11 01:00:55.834: INFO: Pod "pod-secrets-0afb24a8-0649-48da-ad98-07a04c39861c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093395106s
Feb 11 01:00:57.840: INFO: Pod "pod-secrets-0afb24a8-0649-48da-ad98-07a04c39861c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099106219s
STEP: Saw pod success
Feb 11 01:00:57.840: INFO: Pod "pod-secrets-0afb24a8-0649-48da-ad98-07a04c39861c" satisfied condition "success or failure"
Feb 11 01:00:57.844: INFO: Trying to get logs from node jerma-node pod pod-secrets-0afb24a8-0649-48da-ad98-07a04c39861c container secret-volume-test: 
STEP: delete the pod
Feb 11 01:00:57.982: INFO: Waiting for pod pod-secrets-0afb24a8-0649-48da-ad98-07a04c39861c to disappear
Feb 11 01:00:57.994: INFO: Pod pod-secrets-0afb24a8-0649-48da-ad98-07a04c39861c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:00:57.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9446" for this suite.

• [SLOW TEST:8.564 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":192,"skipped":3182,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:00:58.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override all
Feb 11 01:00:58.173: INFO: Waiting up to 5m0s for pod "client-containers-4ee8720b-7f7d-450c-b9a0-0090b10a5213" in namespace "containers-7492" to be "success or failure"
Feb 11 01:00:58.226: INFO: Pod "client-containers-4ee8720b-7f7d-450c-b9a0-0090b10a5213": Phase="Pending", Reason="", readiness=false. Elapsed: 53.428618ms
Feb 11 01:01:00.234: INFO: Pod "client-containers-4ee8720b-7f7d-450c-b9a0-0090b10a5213": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061628538s
Feb 11 01:01:02.240: INFO: Pod "client-containers-4ee8720b-7f7d-450c-b9a0-0090b10a5213": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06763612s
Feb 11 01:01:04.249: INFO: Pod "client-containers-4ee8720b-7f7d-450c-b9a0-0090b10a5213": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075892525s
STEP: Saw pod success
Feb 11 01:01:04.249: INFO: Pod "client-containers-4ee8720b-7f7d-450c-b9a0-0090b10a5213" satisfied condition "success or failure"
Feb 11 01:01:04.252: INFO: Trying to get logs from node jerma-node pod client-containers-4ee8720b-7f7d-450c-b9a0-0090b10a5213 container test-container: 
STEP: delete the pod
Feb 11 01:01:04.302: INFO: Waiting for pod client-containers-4ee8720b-7f7d-450c-b9a0-0090b10a5213 to disappear
Feb 11 01:01:04.315: INFO: Pod client-containers-4ee8720b-7f7d-450c-b9a0-0090b10a5213 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:01:04.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7492" for this suite.

• [SLOW TEST:6.311 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":193,"skipped":3197,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:01:04.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb 11 01:01:04.429: INFO: >>> kubeConfig: /root/.kube/config
Feb 11 01:01:07.365: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:01:18.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3681" for this suite.

• [SLOW TEST:13.785 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":194,"skipped":3249,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:01:18.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:01:18.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:01:26.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8468" for this suite.

• [SLOW TEST:8.202 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":195,"skipped":3250,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:01:26.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0211 01:01:42.747453       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 01:01:42.747: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:01:42.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9543" for this suite.

• [SLOW TEST:17.209 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":196,"skipped":3252,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:01:43.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:01:54.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6285" for this suite.

• [SLOW TEST:11.237 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":197,"skipped":3258,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:01:54.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:01:56.370: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"efeabdd5-98ba-461c-ac62-72cbbcf6e42c", Controller:(*bool)(0xc00568725a), BlockOwnerDeletion:(*bool)(0xc00568725b)}}
Feb 11 01:01:56.391: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"55bb15f0-f395-46b3-88bd-a06bfce310e2", Controller:(*bool)(0xc00564d34a), BlockOwnerDeletion:(*bool)(0xc00564d34b)}}
Feb 11 01:01:56.408: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3c8e2e28-1ec8-4971-8dc1-73367391cd9b", Controller:(*bool)(0xc00568741a), BlockOwnerDeletion:(*bool)(0xc00568741b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:02:01.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8909" for this suite.

• [SLOW TEST:6.797 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":198,"skipped":3280,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:02:01.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Feb 11 01:02:01.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:02:19.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-799" for this suite.

• [SLOW TEST:17.585 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":199,"skipped":3295,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:02:19.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-90d6bf4c-f456-4562-af7b-92314369aaf5
STEP: Creating a pod to test consume configMaps
Feb 11 01:02:19.313: INFO: Waiting up to 5m0s for pod "pod-configmaps-3c4eba05-e270-4a0f-8459-792257e6f398" in namespace "configmap-3405" to be "success or failure"
Feb 11 01:02:19.324: INFO: Pod "pod-configmaps-3c4eba05-e270-4a0f-8459-792257e6f398": Phase="Pending", Reason="", readiness=false. Elapsed: 10.979606ms
Feb 11 01:02:21.335: INFO: Pod "pod-configmaps-3c4eba05-e270-4a0f-8459-792257e6f398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022011652s
Feb 11 01:02:23.345: INFO: Pod "pod-configmaps-3c4eba05-e270-4a0f-8459-792257e6f398": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03195816s
Feb 11 01:02:25.354: INFO: Pod "pod-configmaps-3c4eba05-e270-4a0f-8459-792257e6f398": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040844575s
Feb 11 01:02:27.361: INFO: Pod "pod-configmaps-3c4eba05-e270-4a0f-8459-792257e6f398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047560981s
STEP: Saw pod success
Feb 11 01:02:27.361: INFO: Pod "pod-configmaps-3c4eba05-e270-4a0f-8459-792257e6f398" satisfied condition "success or failure"
Feb 11 01:02:27.366: INFO: Trying to get logs from node jerma-node pod pod-configmaps-3c4eba05-e270-4a0f-8459-792257e6f398 container configmap-volume-test: 
STEP: delete the pod
Feb 11 01:02:27.493: INFO: Waiting for pod pod-configmaps-3c4eba05-e270-4a0f-8459-792257e6f398 to disappear
Feb 11 01:02:27.503: INFO: Pod pod-configmaps-3c4eba05-e270-4a0f-8459-792257e6f398 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:02:27.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3405" for this suite.

• [SLOW TEST:8.368 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":200,"skipped":3307,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:02:27.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:02:27.624: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 11 01:02:29.935: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:02:30.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9262" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":201,"skipped":3316,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:02:30.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 11 01:02:31.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292" in namespace "downward-api-8462" to be "success or failure"
Feb 11 01:02:31.577: INFO: Pod "downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292": Phase="Pending", Reason="", readiness=false. Elapsed: 13.25936ms
Feb 11 01:02:34.275: INFO: Pod "downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711203507s
Feb 11 01:02:36.601: INFO: Pod "downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292": Phase="Pending", Reason="", readiness=false. Elapsed: 5.037106699s
Feb 11 01:02:38.813: INFO: Pod "downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292": Phase="Pending", Reason="", readiness=false. Elapsed: 7.249368118s
Feb 11 01:02:40.821: INFO: Pod "downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292": Phase="Pending", Reason="", readiness=false. Elapsed: 9.257614678s
Feb 11 01:02:42.830: INFO: Pod "downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292": Phase="Pending", Reason="", readiness=false. Elapsed: 11.265938499s
Feb 11 01:02:44.837: INFO: Pod "downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.273603569s
STEP: Saw pod success
Feb 11 01:02:44.838: INFO: Pod "downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292" satisfied condition "success or failure"
Feb 11 01:02:44.842: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292 container client-container: 
STEP: delete the pod
Feb 11 01:02:44.917: INFO: Waiting for pod downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292 to disappear
Feb 11 01:02:44.922: INFO: Pod downwardapi-volume-0a895123-c745-4a56-9496-ebfce48c9292 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:02:44.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8462" for this suite.

• [SLOW TEST:13.962 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":202,"skipped":3348,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:02:44.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 11 01:02:45.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63675964-bb6d-4045-8f92-a53ae039eff7" in namespace "projected-1303" to be "success or failure"
Feb 11 01:02:45.088: INFO: Pod "downwardapi-volume-63675964-bb6d-4045-8f92-a53ae039eff7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.567494ms
Feb 11 01:02:47.100: INFO: Pod "downwardapi-volume-63675964-bb6d-4045-8f92-a53ae039eff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017040277s
Feb 11 01:02:49.110: INFO: Pod "downwardapi-volume-63675964-bb6d-4045-8f92-a53ae039eff7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027587799s
Feb 11 01:02:51.119: INFO: Pod "downwardapi-volume-63675964-bb6d-4045-8f92-a53ae039eff7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035888041s
Feb 11 01:02:53.129: INFO: Pod "downwardapi-volume-63675964-bb6d-4045-8f92-a53ae039eff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046042777s
STEP: Saw pod success
Feb 11 01:02:53.129: INFO: Pod "downwardapi-volume-63675964-bb6d-4045-8f92-a53ae039eff7" satisfied condition "success or failure"
Feb 11 01:02:53.134: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-63675964-bb6d-4045-8f92-a53ae039eff7 container client-container: 
STEP: delete the pod
Feb 11 01:02:53.283: INFO: Waiting for pod downwardapi-volume-63675964-bb6d-4045-8f92-a53ae039eff7 to disappear
Feb 11 01:02:53.325: INFO: Pod downwardapi-volume-63675964-bb6d-4045-8f92-a53ae039eff7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:02:53.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1303" for this suite.

• [SLOW TEST:8.393 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":203,"skipped":3362,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:02:53.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 11 01:02:53.951: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 11 01:02:55.974: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979773, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979773, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979774, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979773, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:02:58.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979773, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979773, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979774, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979773, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:02:59.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979773, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979773, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979774, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716979773, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 11 01:03:03.015: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Feb 11 01:03:09.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4493 to-be-attached-pod -i -c=container1'
Feb 11 01:03:09.251: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:03:09.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4493" for this suite.
STEP: Destroying namespace "webhook-4493-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:16.132 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":204,"skipped":3379,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:03:09.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2463
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating stateful set ss in namespace statefulset-2463
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2463
Feb 11 01:03:09.570: INFO: Found 0 stateful pods, waiting for 1
Feb 11 01:03:19.580: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 11 01:03:19.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2463 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 11 01:03:20.121: INFO: stderr: "I0211 01:03:19.792487    2529 log.go:172] (0xc0009f0fd0) (0xc000afc320) Create stream\nI0211 01:03:19.792717    2529 log.go:172] (0xc0009f0fd0) (0xc000afc320) Stream added, broadcasting: 1\nI0211 01:03:19.801607    2529 log.go:172] (0xc0009f0fd0) Reply frame received for 1\nI0211 01:03:19.801697    2529 log.go:172] (0xc0009f0fd0) (0xc000afc3c0) Create stream\nI0211 01:03:19.801709    2529 log.go:172] (0xc0009f0fd0) (0xc000afc3c0) Stream added, broadcasting: 3\nI0211 01:03:19.803449    2529 log.go:172] (0xc0009f0fd0) Reply frame received for 3\nI0211 01:03:19.803487    2529 log.go:172] (0xc0009f0fd0) (0xc0009f4280) Create stream\nI0211 01:03:19.803500    2529 log.go:172] (0xc0009f0fd0) (0xc0009f4280) Stream added, broadcasting: 5\nI0211 01:03:19.805258    2529 log.go:172] (0xc0009f0fd0) Reply frame received for 5\nI0211 01:03:19.905727    2529 log.go:172] (0xc0009f0fd0) Data frame received for 5\nI0211 01:03:19.905824    2529 log.go:172] (0xc0009f4280) (5) Data frame handling\nI0211 01:03:19.905854    2529 log.go:172] (0xc0009f4280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0211 01:03:20.002668    2529 log.go:172] (0xc0009f0fd0) Data frame received for 3\nI0211 01:03:20.002740    2529 log.go:172] (0xc000afc3c0) (3) Data frame handling\nI0211 01:03:20.002758    2529 log.go:172] (0xc000afc3c0) (3) Data frame sent\nI0211 01:03:20.108414    2529 log.go:172] (0xc0009f0fd0) Data frame received for 1\nI0211 01:03:20.108603    2529 log.go:172] (0xc000afc320) (1) Data frame handling\nI0211 01:03:20.108672    2529 log.go:172] (0xc000afc320) (1) Data frame sent\nI0211 01:03:20.109942    2529 log.go:172] (0xc0009f0fd0) (0xc000afc320) Stream removed, broadcasting: 1\nI0211 01:03:20.110770    2529 log.go:172] (0xc0009f0fd0) (0xc000afc3c0) Stream removed, broadcasting: 3\nI0211 01:03:20.110968    2529 log.go:172] (0xc0009f0fd0) (0xc0009f4280) Stream removed, broadcasting: 5\nI0211 01:03:20.111011    2529 log.go:172] (0xc0009f0fd0) Go away received\nI0211 01:03:20.111212    2529 log.go:172] (0xc0009f0fd0) (0xc000afc320) Stream removed, broadcasting: 1\nI0211 01:03:20.111237    2529 log.go:172] (0xc0009f0fd0) (0xc000afc3c0) Stream removed, broadcasting: 3\nI0211 01:03:20.111246    2529 log.go:172] (0xc0009f0fd0) (0xc0009f4280) Stream removed, broadcasting: 5\n"
Feb 11 01:03:20.122: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 11 01:03:20.122: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 11 01:03:20.127: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 11 01:03:30.133: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 01:03:30.133: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 01:03:30.164: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 11 01:03:30.164: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  }]
Feb 11 01:03:30.164: INFO: 
Feb 11 01:03:30.164: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 11 01:03:32.049: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994908435s
Feb 11 01:03:33.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.109915509s
Feb 11 01:03:34.129: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.058040146s
Feb 11 01:03:35.134: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.030056818s
Feb 11 01:03:36.329: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.024979281s
Feb 11 01:03:37.448: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.830241816s
Feb 11 01:03:38.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.710647532s
Feb 11 01:03:39.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 705.491572ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2463
Feb 11 01:03:40.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2463 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 11 01:03:40.998: INFO: stderr: "I0211 01:03:40.812414    2549 log.go:172] (0xc00075e9a0) (0xc00075a000) Create stream\nI0211 01:03:40.812521    2549 log.go:172] (0xc00075e9a0) (0xc00075a000) Stream added, broadcasting: 1\nI0211 01:03:40.816363    2549 log.go:172] (0xc00075e9a0) Reply frame received for 1\nI0211 01:03:40.816424    2549 log.go:172] (0xc00075e9a0) (0xc000675b80) Create stream\nI0211 01:03:40.816435    2549 log.go:172] (0xc00075e9a0) (0xc000675b80) Stream added, broadcasting: 3\nI0211 01:03:40.822250    2549 log.go:172] (0xc00075e9a0) Reply frame received for 3\nI0211 01:03:40.822297    2549 log.go:172] (0xc00075e9a0) (0xc00075a140) Create stream\nI0211 01:03:40.822308    2549 log.go:172] (0xc00075e9a0) (0xc00075a140) Stream added, broadcasting: 5\nI0211 01:03:40.824028    2549 log.go:172] (0xc00075e9a0) Reply frame received for 5\nI0211 01:03:40.884694    2549 log.go:172] (0xc00075e9a0) Data frame received for 3\nI0211 01:03:40.884723    2549 log.go:172] (0xc000675b80) (3) Data frame handling\nI0211 01:03:40.884743    2549 log.go:172] (0xc00075e9a0) Data frame received for 5\nI0211 01:03:40.884795    2549 log.go:172] (0xc00075a140) (5) Data frame handling\nI0211 01:03:40.884809    2549 log.go:172] (0xc00075a140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0211 01:03:40.884825    2549 log.go:172] (0xc000675b80) (3) Data frame sent\nI0211 01:03:40.987961    2549 log.go:172] (0xc00075e9a0) (0xc000675b80) Stream removed, broadcasting: 3\nI0211 01:03:40.988146    2549 log.go:172] (0xc00075e9a0) Data frame received for 1\nI0211 01:03:40.988162    2549 log.go:172] (0xc00075a000) (1) Data frame handling\nI0211 01:03:40.988178    2549 log.go:172] (0xc00075a000) (1) Data frame sent\nI0211 01:03:40.988184    2549 log.go:172] (0xc00075e9a0) (0xc00075a000) Stream removed, broadcasting: 1\nI0211 01:03:40.988770    2549 log.go:172] (0xc00075e9a0) (0xc00075a140) Stream removed, broadcasting: 5\nI0211 01:03:40.988811    2549 log.go:172] (0xc00075e9a0) (0xc00075a000) Stream removed, broadcasting: 1\nI0211 01:03:40.988820    2549 log.go:172] (0xc00075e9a0) (0xc000675b80) Stream removed, broadcasting: 3\nI0211 01:03:40.988826    2549 log.go:172] (0xc00075e9a0) (0xc00075a140) Stream removed, broadcasting: 5\nI0211 01:03:40.989089    2549 log.go:172] (0xc00075e9a0) Go away received\n"
Feb 11 01:03:40.998: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 11 01:03:40.998: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 11 01:03:40.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2463 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 11 01:03:41.408: INFO: stderr: "I0211 01:03:41.142790    2570 log.go:172] (0xc000a90000) (0xc00073a000) Create stream\nI0211 01:03:41.143104    2570 log.go:172] (0xc000a90000) (0xc00073a000) Stream added, broadcasting: 1\nI0211 01:03:41.147129    2570 log.go:172] (0xc000a90000) Reply frame received for 1\nI0211 01:03:41.147189    2570 log.go:172] (0xc000a90000) (0xc00073a140) Create stream\nI0211 01:03:41.147202    2570 log.go:172] (0xc000a90000) (0xc00073a140) Stream added, broadcasting: 3\nI0211 01:03:41.148094    2570 log.go:172] (0xc000a90000) Reply frame received for 3\nI0211 01:03:41.148109    2570 log.go:172] (0xc000a90000) (0xc00073a1e0) Create stream\nI0211 01:03:41.148113    2570 log.go:172] (0xc000a90000) (0xc00073a1e0) Stream added, broadcasting: 5\nI0211 01:03:41.149158    2570 log.go:172] (0xc000a90000) Reply frame received for 5\nI0211 01:03:41.282106    2570 log.go:172] (0xc000a90000) Data frame received for 5\nI0211 01:03:41.282268    2570 log.go:172] (0xc00073a1e0) (5) Data frame handling\nI0211 01:03:41.282333    2570 log.go:172] (0xc00073a1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0211 01:03:41.320472    2570 log.go:172] (0xc000a90000) Data frame received for 5\nI0211 01:03:41.320534    2570 log.go:172] (0xc00073a1e0) (5) Data frame handling\nI0211 01:03:41.320559    2570 log.go:172] (0xc00073a1e0) (5) Data frame sent\nI0211 01:03:41.320563    2570 log.go:172] (0xc000a90000) Data frame received for 5\nI0211 01:03:41.320567    2570 log.go:172] (0xc00073a1e0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0211 01:03:41.320621    2570 log.go:172] (0xc000a90000) Data frame received for 3\nI0211 01:03:41.320669    2570 log.go:172] (0xc00073a140) (3) Data frame handling\nI0211 01:03:41.320684    2570 log.go:172] (0xc00073a140) (3) Data frame sent\nI0211 01:03:41.320829    2570 log.go:172] (0xc00073a1e0) (5) Data frame sent\nI0211 01:03:41.399380    2570 log.go:172] (0xc000a90000) Data frame received for 1\nI0211 01:03:41.399431    2570 log.go:172] (0xc00073a000) (1) Data frame handling\nI0211 01:03:41.399446    2570 log.go:172] (0xc00073a000) (1) Data frame sent\nI0211 01:03:41.399598    2570 log.go:172] (0xc000a90000) (0xc00073a000) Stream removed, broadcasting: 1\nI0211 01:03:41.399992    2570 log.go:172] (0xc000a90000) (0xc00073a140) Stream removed, broadcasting: 3\nI0211 01:03:41.400678    2570 log.go:172] (0xc000a90000) (0xc00073a1e0) Stream removed, broadcasting: 5\nI0211 01:03:41.400776    2570 log.go:172] (0xc000a90000) (0xc00073a000) Stream removed, broadcasting: 1\nI0211 01:03:41.400795    2570 log.go:172] (0xc000a90000) (0xc00073a140) Stream removed, broadcasting: 3\nI0211 01:03:41.400800    2570 log.go:172] (0xc000a90000) (0xc00073a1e0) Stream removed, broadcasting: 5\nI0211 01:03:41.401570    2570 log.go:172] (0xc000a90000) Go away received\n"
Feb 11 01:03:41.408: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 11 01:03:41.408: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 11 01:03:41.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2463 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 11 01:03:41.747: INFO: stderr: "I0211 01:03:41.590122    2590 log.go:172] (0xc000020fd0) (0xc000647f40) Create stream\nI0211 01:03:41.590264    2590 log.go:172] (0xc000020fd0) (0xc000647f40) Stream added, broadcasting: 1\nI0211 01:03:41.594874    2590 log.go:172] (0xc000020fd0) Reply frame received for 1\nI0211 01:03:41.594928    2590 log.go:172] (0xc000020fd0) (0xc0005fa820) Create stream\nI0211 01:03:41.594943    2590 log.go:172] (0xc000020fd0) (0xc0005fa820) Stream added, broadcasting: 3\nI0211 01:03:41.596245    2590 log.go:172] (0xc000020fd0) Reply frame received for 3\nI0211 01:03:41.596308    2590 log.go:172] (0xc000020fd0) (0xc0007554a0) Create stream\nI0211 01:03:41.596316    2590 log.go:172] (0xc000020fd0) (0xc0007554a0) Stream added, broadcasting: 5\nI0211 01:03:41.597702    2590 log.go:172] (0xc000020fd0) Reply frame received for 5\nI0211 01:03:41.673051    2590 log.go:172] (0xc000020fd0) Data frame received for 5\nI0211 01:03:41.673160    2590 log.go:172] (0xc0007554a0) (5) Data frame handling\nI0211 01:03:41.673249    2590 log.go:172] (0xc0007554a0) (5) Data frame sent\nI0211 01:03:41.673289    2590 log.go:172] (0xc000020fd0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0211 01:03:41.673313    2590 log.go:172] (0xc0005fa820) (3) Data frame handling\nI0211 01:03:41.673409    2590 log.go:172] (0xc0005fa820) (3) Data frame sent\nI0211 01:03:41.673720    2590 log.go:172] (0xc000020fd0) Data frame received for 5\nI0211 01:03:41.673731    2590 log.go:172] (0xc0007554a0) (5) Data frame handling\nI0211 01:03:41.673739    2590 log.go:172] (0xc0007554a0) (5) Data frame sent\n+ true\nI0211 01:03:41.737128    2590 log.go:172] (0xc000020fd0) Data frame received for 1\nI0211 01:03:41.737157    2590 log.go:172] (0xc000647f40) (1) Data frame handling\nI0211 01:03:41.737196    2590 log.go:172] (0xc000647f40) (1) Data frame sent\nI0211 01:03:41.737282    2590 log.go:172] (0xc000020fd0) (0xc000647f40) Stream removed, broadcasting: 1\nI0211 01:03:41.737552    2590 log.go:172] (0xc000020fd0) (0xc0007554a0) Stream removed, broadcasting: 5\nI0211 01:03:41.737596    2590 log.go:172] (0xc000020fd0) (0xc0005fa820) Stream removed, broadcasting: 3\nI0211 01:03:41.737737    2590 log.go:172] (0xc000020fd0) Go away received\nI0211 01:03:41.738446    2590 log.go:172] (0xc000020fd0) (0xc000647f40) Stream removed, broadcasting: 1\nI0211 01:03:41.738462    2590 log.go:172] (0xc000020fd0) (0xc0005fa820) Stream removed, broadcasting: 3\nI0211 01:03:41.738471    2590 log.go:172] (0xc000020fd0) (0xc0007554a0) Stream removed, broadcasting: 5\n"
Feb 11 01:03:41.747: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 11 01:03:41.747: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 11 01:03:41.754: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 01:03:41.755: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 01:03:41.755: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 11 01:03:41.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2463 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 11 01:03:42.054: INFO: stderr: "I0211 01:03:41.915348    2611 log.go:172] (0xc0009be630) (0xc00065bf40) Create stream\nI0211 01:03:41.915454    2611 log.go:172] (0xc0009be630) (0xc00065bf40) Stream added, broadcasting: 1\nI0211 01:03:41.922008    2611 log.go:172] (0xc0009be630) Reply frame received for 1\nI0211 01:03:41.922044    2611 log.go:172] (0xc0009be630) (0xc000620820) Create stream\nI0211 01:03:41.922050    2611 log.go:172] (0xc0009be630) (0xc000620820) Stream added, broadcasting: 3\nI0211 01:03:41.923187    2611 log.go:172] (0xc0009be630) Reply frame received for 3\nI0211 01:03:41.923205    2611 log.go:172] (0xc0009be630) (0xc00072d4a0) Create stream\nI0211 01:03:41.923211    2611 log.go:172] (0xc0009be630) (0xc00072d4a0) Stream added, broadcasting: 5\nI0211 01:03:41.924354    2611 log.go:172] (0xc0009be630) Reply frame received for 5\nI0211 01:03:41.980647    2611 log.go:172] (0xc0009be630) Data frame received for 3\nI0211 01:03:41.980694    2611 log.go:172] (0xc000620820) (3) Data frame handling\nI0211 01:03:41.980709    2611 log.go:172] (0xc000620820) (3) Data frame sent\nI0211 01:03:41.980746    2611 log.go:172] (0xc0009be630) Data frame received for 5\nI0211 01:03:41.980754    2611 log.go:172] (0xc00072d4a0) (5) Data frame handling\nI0211 01:03:41.980766    2611 log.go:172] (0xc00072d4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0211 01:03:42.048747    2611 log.go:172] (0xc0009be630) (0xc00072d4a0) Stream removed, broadcasting: 5\nI0211 01:03:42.048838    2611 log.go:172] (0xc0009be630) Data frame received for 1\nI0211 01:03:42.048857    2611 log.go:172] (0xc00065bf40) (1) Data frame handling\nI0211 01:03:42.048872    2611 log.go:172] (0xc00065bf40) (1) Data frame sent\nI0211 01:03:42.048895    2611 log.go:172] (0xc0009be630) (0xc00065bf40) Stream removed, broadcasting: 1\nI0211 01:03:42.049278    2611 log.go:172] (0xc0009be630) (0xc000620820) Stream removed, broadcasting: 3\nI0211 01:03:42.049303    2611 log.go:172] (0xc0009be630) Go away received\nI0211 01:03:42.049538    2611 log.go:172] (0xc0009be630) (0xc00065bf40) Stream removed, broadcasting: 1\nI0211 01:03:42.049605    2611 log.go:172] (0xc0009be630) (0xc000620820) Stream removed, broadcasting: 3\nI0211 01:03:42.049636    2611 log.go:172] (0xc0009be630) (0xc00072d4a0) Stream removed, broadcasting: 5\n"
Feb 11 01:03:42.055: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 11 01:03:42.055: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 11 01:03:42.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2463 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 11 01:03:42.480: INFO: stderr: "I0211 01:03:42.288474    2632 log.go:172] (0xc000b76370) (0xc000a1a140) Create stream\nI0211 01:03:42.288606    2632 log.go:172] (0xc000b76370) (0xc000a1a140) Stream added, broadcasting: 1\nI0211 01:03:42.297654    2632 log.go:172] (0xc000b76370) Reply frame received for 1\nI0211 01:03:42.298415    2632 log.go:172] (0xc000b76370) (0xc0009fa280) Create stream\nI0211 01:03:42.298603    2632 log.go:172] (0xc000b76370) (0xc0009fa280) Stream added, broadcasting: 3\nI0211 01:03:42.304640    2632 log.go:172] (0xc000b76370) Reply frame received for 3\nI0211 01:03:42.305045    2632 log.go:172] (0xc000b76370) (0xc000aca000) Create stream\nI0211 01:03:42.305120    2632 log.go:172] (0xc000b76370) (0xc000aca000) Stream added, broadcasting: 5\nI0211 01:03:42.307114    2632 log.go:172] (0xc000b76370) Reply frame received for 5\nI0211 01:03:42.383282    2632 log.go:172] (0xc000b76370) Data frame received for 5\nI0211 01:03:42.383333    2632 log.go:172] (0xc000aca000) (5) Data frame handling\nI0211 01:03:42.383365    2632 log.go:172] (0xc000aca000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0211 01:03:42.405694    2632 log.go:172] (0xc000b76370) Data frame received for 3\nI0211 01:03:42.405722    2632 log.go:172] (0xc0009fa280) (3) Data frame handling\nI0211 01:03:42.405743    2632 log.go:172] (0xc0009fa280) (3) Data frame sent\nI0211 01:03:42.467685    2632 log.go:172] (0xc000b76370) Data frame received for 1\nI0211 01:03:42.467770    2632 log.go:172] (0xc000a1a140) (1) Data frame handling\nI0211 01:03:42.467805    2632 log.go:172] (0xc000a1a140) (1) Data frame sent\nI0211 01:03:42.467850    2632 log.go:172] (0xc000b76370) (0xc000a1a140) Stream removed, broadcasting: 1\nI0211 01:03:42.468649    2632 log.go:172] (0xc000b76370) (0xc0009fa280) Stream removed, broadcasting: 3\nI0211 01:03:42.469098    2632 log.go:172] (0xc000b76370) (0xc000aca000) Stream removed, broadcasting: 5\nI0211 01:03:42.469290    2632 log.go:172] (0xc000b76370) Go away received\nI0211 01:03:42.469787    2632 log.go:172] (0xc000b76370) (0xc000a1a140) Stream removed, broadcasting: 1\nI0211 01:03:42.469870    2632 log.go:172] (0xc000b76370) (0xc0009fa280) Stream removed, broadcasting: 3\nI0211 01:03:42.469902    2632 log.go:172] (0xc000b76370) (0xc000aca000) Stream removed, broadcasting: 5\n"
Feb 11 01:03:42.481: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 11 01:03:42.481: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 11 01:03:42.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2463 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 11 01:03:42.951: INFO: stderr: "I0211 01:03:42.688728    2655 log.go:172] (0xc000b74dc0) (0xc000bda140) Create stream\nI0211 01:03:42.688999    2655 log.go:172] (0xc000b74dc0) (0xc000bda140) Stream added, broadcasting: 1\nI0211 01:03:42.698145    2655 log.go:172] (0xc000b74dc0) Reply frame received for 1\nI0211 01:03:42.698224    2655 log.go:172] (0xc000b74dc0) (0xc000bda1e0) Create stream\nI0211 01:03:42.698240    2655 log.go:172] (0xc000b74dc0) (0xc000bda1e0) Stream added, broadcasting: 3\nI0211 01:03:42.699594    2655 log.go:172] (0xc000b74dc0) Reply frame received for 3\nI0211 01:03:42.699671    2655 log.go:172] (0xc000b74dc0) (0xc000bae0a0) Create stream\nI0211 01:03:42.699683    2655 log.go:172] (0xc000b74dc0) (0xc000bae0a0) Stream added, broadcasting: 5\nI0211 01:03:42.702164    2655 log.go:172] (0xc000b74dc0) Reply frame received for 5\nI0211 01:03:42.789832    2655 log.go:172] (0xc000b74dc0) Data frame received for 5\nI0211 01:03:42.789858    2655 log.go:172] (0xc000bae0a0) (5) Data frame handling\nI0211 01:03:42.789876    2655 log.go:172] (0xc000bae0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0211 01:03:42.825623    2655 log.go:172] (0xc000b74dc0) Data frame received for 3\nI0211 01:03:42.825649    2655 log.go:172] (0xc000bda1e0) (3) Data frame handling\nI0211 01:03:42.825668    2655 log.go:172] (0xc000bda1e0) (3) Data frame sent\nI0211 01:03:42.936959    2655 log.go:172] (0xc000b74dc0) Data frame received for 1\nI0211 01:03:42.937097    2655 log.go:172] (0xc000bda140) (1) Data frame handling\nI0211 01:03:42.937169    2655 log.go:172] (0xc000b74dc0) (0xc000bda1e0) Stream removed, broadcasting: 3\nI0211 01:03:42.937270    2655 log.go:172] (0xc000b74dc0) (0xc000bae0a0) Stream removed, broadcasting: 5\nI0211 01:03:42.937318    2655 log.go:172] (0xc000bda140) (1) Data frame sent\nI0211 01:03:42.937334    2655 log.go:172] (0xc000b74dc0) (0xc000bda140) Stream removed, broadcasting: 1\nI0211 01:03:42.937345    2655 log.go:172] (0xc000b74dc0) Go away received\nI0211 01:03:42.939340    2655 log.go:172] (0xc000b74dc0) (0xc000bda140) Stream removed, broadcasting: 1\nI0211 01:03:42.939367    2655 log.go:172] (0xc000b74dc0) (0xc000bda1e0) Stream removed, broadcasting: 3\nI0211 01:03:42.939380    2655 log.go:172] (0xc000b74dc0) (0xc000bae0a0) Stream removed, broadcasting: 5\n"
Feb 11 01:03:42.951: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 11 01:03:42.951: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 11 01:03:42.951: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 01:03:42.995: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 11 01:03:53.018: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 01:03:53.018: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 01:03:53.018: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
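[Editor's note: the mv commands above are how this test toggles pod health. Each webserver pod serves /index.html via httpd and carries an HTTP readiness probe against that path; moving the file out of /usr/local/apache2/htdocs/ makes the probe fail, which is why ss-0, ss-1 and ss-2 flip to Ready=false here, and moving it back (as in the earlier scale-up step) restores readiness. A minimal sketch of such a probe, assuming the v1.17-era core/v1 Go types; newer API releases rename the embedded Handler field to ProbeHandler, and the exact probe in the test source may differ.]

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Readiness probe that polls /index.html on port 80 every second and
	// marks the container unready after a single failure. Removing
	// index.html from the web root therefore flips Ready to false almost
	// immediately, which matches the pod conditions logged below.
	probe := &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/index.html",
				Port: intstr.FromInt(80),
			},
		},
		PeriodSeconds:    1,
		SuccessThreshold: 1,
		FailureThreshold: 1,
	}
	fmt.Printf("probe: %+v\n", probe)
}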
Feb 11 01:03:53.097: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 01:03:53.098: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  }]
Feb 11 01:03:53.098: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:53.098: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:53.098: INFO: 
Feb 11 01:03:53.098: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 01:03:54.613: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 01:03:54.614: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  }]
Feb 11 01:03:54.614: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:54.614: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:54.614: INFO: 
Feb 11 01:03:54.614: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 01:03:55.625: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 01:03:55.625: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  }]
Feb 11 01:03:55.625: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:55.625: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:55.626: INFO: 
Feb 11 01:03:55.626: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 01:03:57.206: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 01:03:57.206: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  }]
Feb 11 01:03:57.206: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:57.206: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:57.207: INFO: 
Feb 11 01:03:57.207: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 01:03:58.216: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 01:03:58.216: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  }]
Feb 11 01:03:58.217: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:58.217: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:58.217: INFO: 
Feb 11 01:03:58.217: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 01:03:59.224: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 11 01:03:59.224: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  }]
Feb 11 01:03:59.225: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:59.225: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:03:59.225: INFO: 
Feb 11 01:03:59.225: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 11 01:04:00.231: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 11 01:04:00.231: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  }]
Feb 11 01:04:00.232: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:04:00.232: INFO: 
Feb 11 01:04:00.232: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 11 01:04:01.241: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 11 01:04:01.241: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  }]
Feb 11 01:04:01.241: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:04:01.241: INFO: 
Feb 11 01:04:01.241: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 11 01:04:02.249: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 11 01:04:02.249: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:09 +0000 UTC  }]
Feb 11 01:04:02.249: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:43 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-11 01:03:30 +0000 UTC  }]
Feb 11 01:04:02.249: INFO: 
Feb 11 01:04:02.249: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-2463
Feb 11 01:04:03.264: INFO: Scaling statefulset ss to 0
Feb 11 01:04:03.287: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 11 01:04:03.291: INFO: Deleting all statefulset in ns statefulset-2463
Feb 11 01:04:03.295: INFO: Scaling statefulset ss to 0
Feb 11 01:04:03.307: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 01:04:03.309: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:04:03.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2463" for this suite.

• [SLOW TEST:53.915 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":205,"skipped":3411,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:04:03.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 11 01:04:03.544: INFO: Number of nodes with available pods: 0
Feb 11 01:04:03.544: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:05.357: INFO: Number of nodes with available pods: 0
Feb 11 01:04:05.357: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:05.717: INFO: Number of nodes with available pods: 0
Feb 11 01:04:05.717: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:06.762: INFO: Number of nodes with available pods: 0
Feb 11 01:04:06.763: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:07.563: INFO: Number of nodes with available pods: 0
Feb 11 01:04:07.563: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:09.420: INFO: Number of nodes with available pods: 0
Feb 11 01:04:09.421: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:09.793: INFO: Number of nodes with available pods: 0
Feb 11 01:04:09.793: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:10.705: INFO: Number of nodes with available pods: 0
Feb 11 01:04:10.705: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:11.574: INFO: Number of nodes with available pods: 1
Feb 11 01:04:11.574: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 01:04:12.562: INFO: Number of nodes with available pods: 1
Feb 11 01:04:12.562: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 11 01:04:13.561: INFO: Number of nodes with available pods: 2
Feb 11 01:04:13.561: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 11 01:04:13.753: INFO: Number of nodes with available pods: 1
Feb 11 01:04:13.754: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:14.765: INFO: Number of nodes with available pods: 1
Feb 11 01:04:14.765: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:15.767: INFO: Number of nodes with available pods: 1
Feb 11 01:04:15.767: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:16.771: INFO: Number of nodes with available pods: 1
Feb 11 01:04:16.771: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:17.779: INFO: Number of nodes with available pods: 1
Feb 11 01:04:17.779: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:18.774: INFO: Number of nodes with available pods: 1
Feb 11 01:04:18.774: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:19.777: INFO: Number of nodes with available pods: 1
Feb 11 01:04:19.777: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:20.772: INFO: Number of nodes with available pods: 1
Feb 11 01:04:20.773: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:04:21.766: INFO: Number of nodes with available pods: 2
Feb 11 01:04:21.766: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4573, will wait for the garbage collector to delete the pods
Feb 11 01:04:21.839: INFO: Deleting DaemonSet.extensions daemon-set took: 9.189453ms
Feb 11 01:04:22.140: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.297175ms
Feb 11 01:04:32.445: INFO: Number of nodes with available pods: 0
Feb 11 01:04:32.445: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 01:04:32.448: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4573/daemonsets","resourceVersion":"7649898"},"items":null}

Feb 11 01:04:32.450: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4573/pods","resourceVersion":"7649898"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:04:32.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4573" for this suite.

• [SLOW TEST:29.082 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":206,"skipped":3411,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:04:32.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-c8070967-cbc5-4014-bfd0-c880f998737c
STEP: Creating a pod to test consume secrets
Feb 11 01:04:32.669: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5ce3bef0-b00e-4e97-b79f-6fb3f894e5c2" in namespace "projected-3220" to be "success or failure"
Feb 11 01:04:32.683: INFO: Pod "pod-projected-secrets-5ce3bef0-b00e-4e97-b79f-6fb3f894e5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.570633ms
Feb 11 01:04:34.692: INFO: Pod "pod-projected-secrets-5ce3bef0-b00e-4e97-b79f-6fb3f894e5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022353064s
Feb 11 01:04:36.696: INFO: Pod "pod-projected-secrets-5ce3bef0-b00e-4e97-b79f-6fb3f894e5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026945952s
Feb 11 01:04:38.702: INFO: Pod "pod-projected-secrets-5ce3bef0-b00e-4e97-b79f-6fb3f894e5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032849052s
Feb 11 01:04:40.715: INFO: Pod "pod-projected-secrets-5ce3bef0-b00e-4e97-b79f-6fb3f894e5c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045086186s
STEP: Saw pod success
Feb 11 01:04:40.715: INFO: Pod "pod-projected-secrets-5ce3bef0-b00e-4e97-b79f-6fb3f894e5c2" satisfied condition "success or failure"
Feb 11 01:04:40.720: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-5ce3bef0-b00e-4e97-b79f-6fb3f894e5c2 container projected-secret-volume-test: 
STEP: delete the pod
Feb 11 01:04:40.816: INFO: Waiting for pod pod-projected-secrets-5ce3bef0-b00e-4e97-b79f-6fb3f894e5c2 to disappear
Feb 11 01:04:40.832: INFO: Pod pod-projected-secrets-5ce3bef0-b00e-4e97-b79f-6fb3f894e5c2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:04:40.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3220" for this suite.

• [SLOW TEST:8.372 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":207,"skipped":3475,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:04:40.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:04:40.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb 11 01:04:41.579: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-11T01:04:41Z generation:1 name:name1 resourceVersion:7649971 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8cba0870-5993-4f10-a4f0-73eb54ab1e6a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb 11 01:04:51.589: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-11T01:04:51Z generation:1 name:name2 resourceVersion:7650004 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:64a4397d-fd3f-46c5-a0dc-bbcfe02f8e5b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb 11 01:05:01.599: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-11T01:04:41Z generation:2 name:name1 resourceVersion:7650028 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8cba0870-5993-4f10-a4f0-73eb54ab1e6a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb 11 01:05:11.618: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-11T01:04:51Z generation:2 name:name2 resourceVersion:7650052 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:64a4397d-fd3f-46c5-a0dc-bbcfe02f8e5b] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb 11 01:05:21.631: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-11T01:04:41Z generation:2 name:name1 resourceVersion:7650076 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8cba0870-5993-4f10-a4f0-73eb54ab1e6a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb 11 01:05:31.643: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-11T01:04:51Z generation:2 name:name2 resourceVersion:7650100 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:64a4397d-fd3f-46c5-a0dc-bbcfe02f8e5b] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:05:42.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-1659" for this suite.

• [SLOW TEST:61.334 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":208,"skipped":3478,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:05:42.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0211 01:05:52.614207       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 01:05:52.614: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:05:52.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5255" for this suite.

• [SLOW TEST:10.463 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":209,"skipped":3493,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:05:52.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-b7a2c823-bb20-487d-a324-722ab79d5dba
STEP: Creating a pod to test consume configMaps
Feb 11 01:05:52.782: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3c7a1898-23f7-4869-8d80-150f103f0d6f" in namespace "projected-1894" to be "success or failure"
Feb 11 01:05:52.785: INFO: Pod "pod-projected-configmaps-3c7a1898-23f7-4869-8d80-150f103f0d6f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112318ms
Feb 11 01:05:54.792: INFO: Pod "pod-projected-configmaps-3c7a1898-23f7-4869-8d80-150f103f0d6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010835394s
Feb 11 01:05:56.797: INFO: Pod "pod-projected-configmaps-3c7a1898-23f7-4869-8d80-150f103f0d6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015634078s
Feb 11 01:05:58.801: INFO: Pod "pod-projected-configmaps-3c7a1898-23f7-4869-8d80-150f103f0d6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019784676s
Feb 11 01:06:00.822: INFO: Pod "pod-projected-configmaps-3c7a1898-23f7-4869-8d80-150f103f0d6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040551584s
STEP: Saw pod success
Feb 11 01:06:00.822: INFO: Pod "pod-projected-configmaps-3c7a1898-23f7-4869-8d80-150f103f0d6f" satisfied condition "success or failure"
Feb 11 01:06:00.825: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-3c7a1898-23f7-4869-8d80-150f103f0d6f container projected-configmap-volume-test: 
STEP: delete the pod
Feb 11 01:06:00.874: INFO: Waiting for pod pod-projected-configmaps-3c7a1898-23f7-4869-8d80-150f103f0d6f to disappear
Feb 11 01:06:00.878: INFO: Pod pod-projected-configmaps-3c7a1898-23f7-4869-8d80-150f103f0d6f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:06:00.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1894" for this suite.

• [SLOW TEST:8.245 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":210,"skipped":3513,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:06:00.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 11 01:06:01.197: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 11 01:06:01.219: INFO: Waiting for terminating namespaces to be deleted...
Feb 11 01:06:01.223: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 11 01:06:01.231: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 11 01:06:01.231: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 11 01:06:01.231: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 11 01:06:01.231: INFO: 	Container weave ready: true, restart count 1
Feb 11 01:06:01.231: INFO: 	Container weave-npc ready: true, restart count 0
Feb 11 01:06:01.231: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 11 01:06:01.247: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 11 01:06:01.248: INFO: 	Container coredns ready: true, restart count 0
Feb 11 01:06:01.248: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 11 01:06:01.248: INFO: 	Container coredns ready: true, restart count 0
Feb 11 01:06:01.248: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 11 01:06:01.248: INFO: 	Container kube-controller-manager ready: true, restart count 5
Feb 11 01:06:01.248: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 11 01:06:01.248: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 11 01:06:01.248: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 11 01:06:01.248: INFO: 	Container weave ready: true, restart count 0
Feb 11 01:06:01.248: INFO: 	Container weave-npc ready: true, restart count 0
Feb 11 01:06:01.248: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 11 01:06:01.248: INFO: 	Container kube-scheduler ready: true, restart count 7
Feb 11 01:06:01.248: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 11 01:06:01.248: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 11 01:06:01.248: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 11 01:06:01.248: INFO: 	Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8482dc53-247d-4bff-8c16-b2c9025aa9db 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8482dc53-247d-4bff-8c16-b2c9025aa9db off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8482dc53-247d-4bff-8c16-b2c9025aa9db
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:06:19.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2907" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:18.773 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":280,"completed":211,"skipped":3536,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
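
For reference, a minimal hand-run version of the nodeSelector flow this test exercises, assuming a reachable cluster; the node name, label key, and image below are illustrative rather than the test's generated values:

kubectl label node jerma-node example.com/e2e-demo=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# the pod only schedules onto a node carrying the matching label;
# remove the label afterwards, as the test does:
kubectl label node jerma-node example.com/e2e-demo-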
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:06:19.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: executing a command with run --rm and attach with stdin
Feb 11 01:06:19.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9579 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 11 01:06:27.508: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0211 01:06:26.805079    2675 log.go:172] (0xc000758160) (0xc000698280) Create stream\nI0211 01:06:26.805324    2675 log.go:172] (0xc000758160) (0xc000698280) Stream added, broadcasting: 1\nI0211 01:06:26.814736    2675 log.go:172] (0xc000758160) Reply frame received for 1\nI0211 01:06:26.814838    2675 log.go:172] (0xc000758160) (0xc000698320) Create stream\nI0211 01:06:26.814863    2675 log.go:172] (0xc000758160) (0xc000698320) Stream added, broadcasting: 3\nI0211 01:06:26.823827    2675 log.go:172] (0xc000758160) Reply frame received for 3\nI0211 01:06:26.823911    2675 log.go:172] (0xc000758160) (0xc0007e6500) Create stream\nI0211 01:06:26.823926    2675 log.go:172] (0xc000758160) (0xc0007e6500) Stream added, broadcasting: 5\nI0211 01:06:26.825410    2675 log.go:172] (0xc000758160) Reply frame received for 5\nI0211 01:06:26.825460    2675 log.go:172] (0xc000758160) (0xc000a90000) Create stream\nI0211 01:06:26.825475    2675 log.go:172] (0xc000758160) (0xc000a90000) Stream added, broadcasting: 7\nI0211 01:06:26.826674    2675 log.go:172] (0xc000758160) Reply frame received for 7\nI0211 01:06:26.827216    2675 log.go:172] (0xc000698320) (3) Writing data frame\nI0211 01:06:26.827634    2675 log.go:172] (0xc000698320) (3) Writing data frame\nI0211 01:06:26.836112    2675 log.go:172] (0xc000758160) Data frame received for 5\nI0211 01:06:26.836165    2675 log.go:172] (0xc0007e6500) (5) Data frame handling\nI0211 01:06:26.836193    2675 log.go:172] (0xc0007e6500) (5) Data frame sent\nI0211 01:06:26.837576    2675 log.go:172] (0xc000758160) Data frame received for 5\nI0211 01:06:26.837617    2675 log.go:172] (0xc0007e6500) (5) Data frame handling\nI0211 01:06:26.837634    2675 log.go:172] (0xc0007e6500) (5) Data frame sent\nI0211 01:06:27.443139    2675 log.go:172] (0xc000758160) Data frame received for 1\nI0211 01:06:27.443252    2675 log.go:172] (0xc000698280) (1) Data frame handling\nI0211 01:06:27.443351    2675 log.go:172] (0xc000698280) (1) Data frame sent\nI0211 01:06:27.444798    2675 log.go:172] (0xc000758160) (0xc000698320) Stream removed, broadcasting: 3\nI0211 01:06:27.445297    2675 log.go:172] (0xc000758160) (0xc000698280) Stream removed, broadcasting: 1\nI0211 01:06:27.446887    2675 log.go:172] (0xc000758160) (0xc0007e6500) Stream removed, broadcasting: 5\nI0211 01:06:27.447300    2675 log.go:172] (0xc000758160) (0xc000a90000) Stream removed, broadcasting: 7\nI0211 01:06:27.447465    2675 log.go:172] (0xc000758160) Go away received\nI0211 01:06:27.447646    2675 log.go:172] (0xc000758160) (0xc000698280) Stream removed, broadcasting: 1\nI0211 01:06:27.447715    2675 log.go:172] (0xc000758160) (0xc000698320) Stream removed, broadcasting: 3\nI0211 01:06:27.447728    2675 log.go:172] (0xc000758160) (0xc0007e6500) Stream removed, broadcasting: 5\nI0211 01:06:27.447743    2675 log.go:172] (0xc000758160) (0xc000a90000) Stream removed, broadcasting: 7\n"
Feb 11 01:06:27.509: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:06:29.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9579" for this suite.

• [SLOW TEST:9.891 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":280,"completed":212,"skipped":3542,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
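
The deprecation warning captured in stderr above is worth acting on: --generator=job/v1 was removed from kubectl run in later releases. A sketch of the invocation the test drives and a rough modern equivalent (the replacement job name is illustrative):

# deprecated form used by the test (kubectl v1.17 era):
kubectl run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin \
  -- sh -c 'cat && echo "stdin closed"'
# later kubectl: create the Job explicitly, then delete it once it completes
kubectl create job rm-demo --image=docker.io/library/busybox:1.29 -- sh -c 'echo "stdin closed"'
kubectl wait --for=condition=complete job/rm-demo
kubectl delete job rm-demo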
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:06:29.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:06:29.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2878" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":213,"skipped":3561,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
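
The CLI equivalent of the cross-namespace listing this test performs (the service name below is illustrative):

kubectl get services --all-namespaces
# or narrow the listing to one service name across every namespace:
kubectl get services --all-namespaces --field-selector metadata.name=test-service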
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:06:29.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-cea5325f-57fb-42b5-9603-84c95f22357e
STEP: Creating a pod to test consume secrets
Feb 11 01:06:30.024: INFO: Waiting up to 5m0s for pod "pod-secrets-8319678d-2805-4c14-95f0-094f9a18cd55" in namespace "secrets-5681" to be "success or failure"
Feb 11 01:06:30.070: INFO: Pod "pod-secrets-8319678d-2805-4c14-95f0-094f9a18cd55": Phase="Pending", Reason="", readiness=false. Elapsed: 45.93423ms
Feb 11 01:06:32.077: INFO: Pod "pod-secrets-8319678d-2805-4c14-95f0-094f9a18cd55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053110741s
Feb 11 01:06:34.083: INFO: Pod "pod-secrets-8319678d-2805-4c14-95f0-094f9a18cd55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059323026s
Feb 11 01:06:36.091: INFO: Pod "pod-secrets-8319678d-2805-4c14-95f0-094f9a18cd55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066875373s
Feb 11 01:06:38.099: INFO: Pod "pod-secrets-8319678d-2805-4c14-95f0-094f9a18cd55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075498428s
STEP: Saw pod success
Feb 11 01:06:38.100: INFO: Pod "pod-secrets-8319678d-2805-4c14-95f0-094f9a18cd55" satisfied condition "success or failure"
Feb 11 01:06:38.104: INFO: Trying to get logs from node jerma-node pod pod-secrets-8319678d-2805-4c14-95f0-094f9a18cd55 container secret-volume-test: 
STEP: delete the pod
Feb 11 01:06:38.154: INFO: Waiting for pod pod-secrets-8319678d-2805-4c14-95f0-094f9a18cd55 to disappear
Feb 11 01:06:38.206: INFO: Pod pod-secrets-8319678d-2805-4c14-95f0-094f9a18cd55 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:06:38.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5681" for this suite.

• [SLOW TEST:8.475 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":214,"skipped":3567,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
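
A minimal sketch of the pattern under test: a secret volume mounted with a restrictive defaultMode, readable by a non-root user via fsGroup. Names, UIDs, and the image are illustrative:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1001         # secret files get group ownership 1001
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0440   # r--r----- so the fsGroup can read it
EOF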
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:06:38.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 11 01:06:38.406: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88c3aa50-dd74-44dc-a998-a24d17af5c39" in namespace "downward-api-7136" to be "success or failure"
Feb 11 01:06:38.427: INFO: Pod "downwardapi-volume-88c3aa50-dd74-44dc-a998-a24d17af5c39": Phase="Pending", Reason="", readiness=false. Elapsed: 21.250314ms
Feb 11 01:06:40.442: INFO: Pod "downwardapi-volume-88c3aa50-dd74-44dc-a998-a24d17af5c39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035884132s
Feb 11 01:06:42.448: INFO: Pod "downwardapi-volume-88c3aa50-dd74-44dc-a998-a24d17af5c39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042434626s
Feb 11 01:06:44.458: INFO: Pod "downwardapi-volume-88c3aa50-dd74-44dc-a998-a24d17af5c39": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052023033s
Feb 11 01:06:46.469: INFO: Pod "downwardapi-volume-88c3aa50-dd74-44dc-a998-a24d17af5c39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062732617s
STEP: Saw pod success
Feb 11 01:06:46.469: INFO: Pod "downwardapi-volume-88c3aa50-dd74-44dc-a998-a24d17af5c39" satisfied condition "success or failure"
Feb 11 01:06:46.472: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-88c3aa50-dd74-44dc-a998-a24d17af5c39 container client-container: 
STEP: delete the pod
Feb 11 01:06:46.518: INFO: Waiting for pod downwardapi-volume-88c3aa50-dd74-44dc-a998-a24d17af5c39 to disappear
Feb 11 01:06:46.534: INFO: Pod downwardapi-volume-88c3aa50-dd74-44dc-a998-a24d17af5c39 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:06:46.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7136" for this suite.

• [SLOW TEST:8.312 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":215,"skipped":3582,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
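
What the test verifies is that when a container sets no memory limit, a downwardAPI volume item for limits.memory falls back to the node's allocatable memory. A sketch of that shape, names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downward-limit-demo   # with no limit set, prints node allocatable memory in bytes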
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:06:46.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-42827cd9-f2a1-408e-994e-efefe4b4ef52 in namespace container-probe-287
Feb 11 01:06:56.668: INFO: Started pod liveness-42827cd9-f2a1-408e-994e-efefe4b4ef52 in namespace container-probe-287
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 01:06:56.674: INFO: Initial restart count of pod liveness-42827cd9-f2a1-408e-994e-efefe4b4ef52 is 0
Feb 11 01:07:08.763: INFO: Restart count of pod container-probe-287/liveness-42827cd9-f2a1-408e-994e-efefe4b4ef52 is now 1 (12.089482131s elapsed)
Feb 11 01:07:28.842: INFO: Restart count of pod container-probe-287/liveness-42827cd9-f2a1-408e-994e-efefe4b4ef52 is now 2 (32.167811017s elapsed)
Feb 11 01:07:48.921: INFO: Restart count of pod container-probe-287/liveness-42827cd9-f2a1-408e-994e-efefe4b4ef52 is now 3 (52.247226512s elapsed)
Feb 11 01:08:11.027: INFO: Restart count of pod container-probe-287/liveness-42827cd9-f2a1-408e-994e-efefe4b4ef52 is now 4 (1m14.352930109s elapsed)
Feb 11 01:09:09.502: INFO: Restart count of pod container-probe-287/liveness-42827cd9-f2a1-408e-994e-efefe4b4ef52 is now 5 (2m12.8277355s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:09:09.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-287" for this suite.

• [SLOW TEST:143.110 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":216,"skipped":3587,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
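
restartCount is the counter the test polls above; it only ever increases. A sketch that reproduces the behaviour with a probe that can never succeed (nothing listens on the port), names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080        # nothing serves here, so every probe fails
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
# watch the counter climb and never reset:
kubectl get pod liveness-demo -w -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'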
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:09:09.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5249.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5249.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5249.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5249.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 01:09:22.008: INFO: DNS probes using dns-test-30a52da1-06ca-4c1a-b2ad-9260c0ebe5f4 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5249.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5249.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5249.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5249.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 01:09:34.190: INFO: File wheezy_udp@dns-test-service-3.dns-5249.svc.cluster.local from pod  dns-5249/dns-test-f99ffa16-3a38-4648-97a5-91a4cf7d77ab contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 11 01:09:34.193: INFO: File jessie_udp@dns-test-service-3.dns-5249.svc.cluster.local from pod  dns-5249/dns-test-f99ffa16-3a38-4648-97a5-91a4cf7d77ab contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 11 01:09:34.193: INFO: Lookups using dns-5249/dns-test-f99ffa16-3a38-4648-97a5-91a4cf7d77ab failed for: [wheezy_udp@dns-test-service-3.dns-5249.svc.cluster.local jessie_udp@dns-test-service-3.dns-5249.svc.cluster.local]

Feb 11 01:09:39.209: INFO: File wheezy_udp@dns-test-service-3.dns-5249.svc.cluster.local from pod  dns-5249/dns-test-f99ffa16-3a38-4648-97a5-91a4cf7d77ab contains '' instead of 'bar.example.com.'
Feb 11 01:09:39.215: INFO: File jessie_udp@dns-test-service-3.dns-5249.svc.cluster.local from pod  dns-5249/dns-test-f99ffa16-3a38-4648-97a5-91a4cf7d77ab contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 11 01:09:39.215: INFO: Lookups using dns-5249/dns-test-f99ffa16-3a38-4648-97a5-91a4cf7d77ab failed for: [wheezy_udp@dns-test-service-3.dns-5249.svc.cluster.local jessie_udp@dns-test-service-3.dns-5249.svc.cluster.local]

Feb 11 01:09:44.203: INFO: File wheezy_udp@dns-test-service-3.dns-5249.svc.cluster.local from pod  dns-5249/dns-test-f99ffa16-3a38-4648-97a5-91a4cf7d77ab contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 11 01:09:44.208: INFO: File jessie_udp@dns-test-service-3.dns-5249.svc.cluster.local from pod  dns-5249/dns-test-f99ffa16-3a38-4648-97a5-91a4cf7d77ab contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 11 01:09:44.208: INFO: Lookups using dns-5249/dns-test-f99ffa16-3a38-4648-97a5-91a4cf7d77ab failed for: [wheezy_udp@dns-test-service-3.dns-5249.svc.cluster.local jessie_udp@dns-test-service-3.dns-5249.svc.cluster.local]

Feb 11 01:09:49.214: INFO: DNS probes using dns-test-f99ffa16-3a38-4648-97a5-91a4cf7d77ab succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5249.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5249.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5249.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5249.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 01:10:01.585: INFO: DNS probes using dns-test-3d35eead-4a81-443c-bd19-a4fff7b6b02c succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:10:01.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5249" for this suite.

• [SLOW TEST:52.067 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":217,"skipped":3610,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
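
The three probe phases above map to one Service whose spec is mutated twice; the stale foo.example.com answers logged between 01:09:34 and 01:09:44 are DNS caches catching up after the patch. A sketch of the first two phases, names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-demo
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# from any pod with dig installed:
#   dig +short dns-demo.default.svc.cluster.local CNAME   -> foo.example.com.
kubectl patch service dns-demo -p '{"spec":{"externalName":"bar.example.com"}}'
#   dig +short dns-demo.default.svc.cluster.local CNAME   -> bar.example.com. (once caches expire)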
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:10:01.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-04427482-8504-4f08-9488-a5ff5c63b0f3 in namespace container-probe-6856
Feb 11 01:10:11.921: INFO: Started pod busybox-04427482-8504-4f08-9488-a5ff5c63b0f3 in namespace container-probe-6856
STEP: checking the pod's current state and verifying that restartCount is present
Feb 11 01:10:11.925: INFO: Initial restart count of pod busybox-04427482-8504-4f08-9488-a5ff5c63b0f3 is 0
Feb 11 01:11:08.274: INFO: Restart count of pod container-probe-6856/busybox-04427482-8504-4f08-9488-a5ff5c63b0f3 is now 1 (56.348524703s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:11:08.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6856" for this suite.

• [SLOW TEST:66.610 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":218,"skipped":3617,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
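
A sketch of the classic pattern this test drives: the container creates a file, removes it after a while, and the exec probe restarts the container once cat starts failing. Timings and names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: exec-liveness-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF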
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:11:08.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:11:08.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9119" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":219,"skipped":3637,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
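
The Lease API this test exercises (coordination.k8s.io/v1) also backs node heartbeats and leader election. Quick ways to poke at it; the lease name and holder below are illustrative:

kubectl get leases -n kube-node-lease    # one Lease per node, renewed by each kubelet
kubectl apply -f - <<'EOF'
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: demo-lease
spec:
  holderIdentity: demo-holder
  leaseDurationSeconds: 30
EOF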
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:11:08.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 11 01:11:16.774: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:11:16.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1837" for this suite.

• [SLOW TEST:8.300 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":220,"skipped":3644,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
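
Sketch of the combination under test: a non-root container writing its termination message to a non-default path, which the kubelet then surfaces in status (matching the Expected: &{DONE} line above). Names and the UID are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    securityContext:
      runAsUser: 1000
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log
EOF
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # -> DONE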
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:11:16.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 11 01:11:17.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee51c2bf-acc8-4e79-9de4-f3365926e783" in namespace "downward-api-3502" to be "success or failure"
Feb 11 01:11:17.170: INFO: Pod "downwardapi-volume-ee51c2bf-acc8-4e79-9de4-f3365926e783": Phase="Pending", Reason="", readiness=false. Elapsed: 8.902441ms
Feb 11 01:11:19.177: INFO: Pod "downwardapi-volume-ee51c2bf-acc8-4e79-9de4-f3365926e783": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016311082s
Feb 11 01:11:21.183: INFO: Pod "downwardapi-volume-ee51c2bf-acc8-4e79-9de4-f3365926e783": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021743696s
Feb 11 01:11:23.190: INFO: Pod "downwardapi-volume-ee51c2bf-acc8-4e79-9de4-f3365926e783": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028687892s
Feb 11 01:11:25.198: INFO: Pod "downwardapi-volume-ee51c2bf-acc8-4e79-9de4-f3365926e783": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037220848s
STEP: Saw pod success
Feb 11 01:11:25.198: INFO: Pod "downwardapi-volume-ee51c2bf-acc8-4e79-9de4-f3365926e783" satisfied condition "success or failure"
Feb 11 01:11:25.204: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ee51c2bf-acc8-4e79-9de4-f3365926e783 container client-container: 
STEP: delete the pod
Feb 11 01:11:25.262: INFO: Waiting for pod downwardapi-volume-ee51c2bf-acc8-4e79-9de4-f3365926e783 to disappear
Feb 11 01:11:25.325: INFO: Pod downwardapi-volume-ee51c2bf-acc8-4e79-9de4-f3365926e783 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:11:25.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3502" for this suite.

• [SLOW TEST:8.439 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":221,"skipped":3651,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
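
Same downward API field family as the volume test above, but on the request side; the env-var form below is a slightly shorter sketch than the volume the test mounts. Names are illustrative; with the default divisor of 1 the value comes out in bytes:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo $MEMORY_REQUEST"]
    resources:
      requests:
        memory: 32Mi
    env:
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: main
          resource: requests.memory
EOF
kubectl logs downward-request-demo   # -> 33554432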
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:11:25.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:11:25.584: INFO: Creating ReplicaSet my-hostname-basic-1821d49d-fcf2-4755-8826-6f7bf540f082
Feb 11 01:11:25.694: INFO: Pod name my-hostname-basic-1821d49d-fcf2-4755-8826-6f7bf540f082: Found 0 pods out of 1
Feb 11 01:11:30.723: INFO: Pod name my-hostname-basic-1821d49d-fcf2-4755-8826-6f7bf540f082: Found 1 pod out of 1
Feb 11 01:11:30.723: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1821d49d-fcf2-4755-8826-6f7bf540f082" is running
Feb 11 01:11:32.740: INFO: Pod "my-hostname-basic-1821d49d-fcf2-4755-8826-6f7bf540f082-rrg64" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 01:11:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 01:11:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1821d49d-fcf2-4755-8826-6f7bf540f082]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 01:11:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1821d49d-fcf2-4755-8826-6f7bf540f082]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-11 01:11:25 +0000 UTC Reason: Message:}])
Feb 11 01:11:32.740: INFO: Trying to dial the pod
Feb 11 01:11:37.766: INFO: Controller my-hostname-basic-1821d49d-fcf2-4755-8826-6f7bf540f082: Got expected result from replica 1 [my-hostname-basic-1821d49d-fcf2-4755-8826-6f7bf540f082-rrg64]: "my-hostname-basic-1821d49d-fcf2-4755-8826-6f7bf540f082-rrg64", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:11:37.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5326" for this suite.

• [SLOW TEST:12.456 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":222,"skipped":3654,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
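
A sketch of the ReplicaSet shape this test creates; agnhost's serve-hostname mode (an assumption here, mirroring the agnhost:2.8 image seen elsewhere in this run) answers HTTP on 9376 with the pod's own name, which is what the "Trying to dial the pod" step checks. Names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
EOF
# from inside the cluster: curl http://<pod-ip>:9376/ returns the pod name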
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:11:37.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-f5eff4d0-729b-4a23-b829-48b2ea5ef513
STEP: Creating a pod to test consume secrets
Feb 11 01:11:38.100: INFO: Waiting up to 5m0s for pod "pod-secrets-7a35c920-da74-451c-bce5-c85ba48bd760" in namespace "secrets-4567" to be "success or failure"
Feb 11 01:11:38.136: INFO: Pod "pod-secrets-7a35c920-da74-451c-bce5-c85ba48bd760": Phase="Pending", Reason="", readiness=false. Elapsed: 36.033487ms
Feb 11 01:11:40.143: INFO: Pod "pod-secrets-7a35c920-da74-451c-bce5-c85ba48bd760": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042769253s
Feb 11 01:11:42.150: INFO: Pod "pod-secrets-7a35c920-da74-451c-bce5-c85ba48bd760": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049508514s
Feb 11 01:11:44.156: INFO: Pod "pod-secrets-7a35c920-da74-451c-bce5-c85ba48bd760": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055907084s
Feb 11 01:11:46.162: INFO: Pod "pod-secrets-7a35c920-da74-451c-bce5-c85ba48bd760": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061767181s
STEP: Saw pod success
Feb 11 01:11:46.162: INFO: Pod "pod-secrets-7a35c920-da74-451c-bce5-c85ba48bd760" satisfied condition "success or failure"
Feb 11 01:11:46.187: INFO: Trying to get logs from node jerma-node pod pod-secrets-7a35c920-da74-451c-bce5-c85ba48bd760 container secret-volume-test: 
STEP: delete the pod
Feb 11 01:11:46.222: INFO: Waiting for pod pod-secrets-7a35c920-da74-451c-bce5-c85ba48bd760 to disappear
Feb 11 01:11:46.300: INFO: Pod pod-secrets-7a35c920-da74-451c-bce5-c85ba48bd760 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:11:46.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4567" for this suite.

• [SLOW TEST:8.517 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":223,"skipped":3674,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
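
Here the test maps a single secret key to a custom path with a per-item mode, which overrides defaultMode for that one file. A sketch, names illustrative:

kubectl create secret generic demo-map-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-item-demo
spec:
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-map-secret
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400        # r-------- for this file only
EOF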
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:11:46.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:11:46.446: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:11:46.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9806" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":280,"completed":224,"skipped":3691,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
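
The feature under test is switched on by subresources.status on the CRD, which splits /status off from the main resource so spec and status can be read, updated, and patched independently. A sketch with an illustrative group and kind (newer kubectl can reach the endpoint with --subresource=status; this test drives it through the API client):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com
spec:
  group: demo.example.com
  names:
    plural: widgets
    singular: widget
    kind: Widget
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}          # enables GET/PUT/PATCH on .../widgets/<name>/status
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF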
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:11:46.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Feb 11 01:11:47.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:12:03.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1115" for this suite.

• [SLOW TEST:16.840 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":225,"skipped":3712,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
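
The "mark a version not served" step flips served: false on one version of a multi-version CRD, after which that version's definitions disappear from the aggregated /openapi/v2 document while the other version's remain. A sketch of the shape involved, group and kind illustrative:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: gadgets.demo.example.com
spec:
  group: demo.example.com
  names:
    plural: gadgets
    singular: gadget
    kind: Gadget
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: false         # not served: dropped from the published OpenAPI spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl get --raw /openapi/v2 | grep -o Gadget | wc -l   # counts only the served version's definitions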
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:12:03.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-08bb1646-d429-4ff7-9d59-09c33b31e2a7
STEP: Creating secret with name s-test-opt-upd-665dd95b-d2a9-4aaa-b074-b9a921664e8a
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-08bb1646-d429-4ff7-9d59-09c33b31e2a7
STEP: Updating secret s-test-opt-upd-665dd95b-d2a9-4aaa-b074-b9a921664e8a
STEP: Creating secret with name s-test-opt-create-e66e3d97-24a7-471d-9ee3-30107e8f5bf5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:12:16.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8134" for this suite.

• [SLOW TEST:12.392 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":226,"skipped":3757,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
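
The optional: true flag is what lets the test's pod keep running while secrets are deleted and created underneath it; the kubelet re-syncs the volume contents on its next sync period. A minimal sketch of the create-after-mount case, names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/opt-secret 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: opt
      mountPath: /etc/opt-secret
  volumes:
  - name: opt
    secret:
      secretName: demo-opt-secret
      optional: true      # pod starts even though the secret does not exist yet
EOF
kubectl create secret generic demo-opt-secret --from-literal=data-1=value-1
kubectl logs -f optional-secret-demo   # the file appears once the kubelet syncs (up to ~1 minute)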
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:12:16.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:12:16.193: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 11 01:12:21.206: INFO: Pod name cleanup-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Feb 11 01:12:21.251: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 11 01:12:35.434: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-1035 /apis/apps/v1/namespaces/deployment-1035/deployments/test-cleanup-deployment 64959367-db5a-4fce-8275-9ac008d6ebd1 7651718 1 2020-02-11 01:12:21 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00385a478  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-11 01:12:21 +0000 UTC,LastTransitionTime:2020-02-11 01:12:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-02-11 01:12:33 +0000 UTC,LastTransitionTime:2020-02-11 01:12:21 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 11 01:12:35.437: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-1035 /apis/apps/v1/namespaces/deployment-1035/replicasets/test-cleanup-deployment-55ffc6b7b6 2769906a-0773-4447-bc37-439f138654ba 7651707 1 2020-02-11 01:12:21 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 64959367-db5a-4fce-8275-9ac008d6ebd1 0xc00385a987 0xc00385a988}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00385a9f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 11 01:12:35.439: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-cxzl8" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-cxzl8 test-cleanup-deployment-55ffc6b7b6- deployment-1035 /api/v1/namespaces/deployment-1035/pods/test-cleanup-deployment-55ffc6b7b6-cxzl8 f9ae8ebe-3176-46e1-aabc-322cab5b550c 7651706 0 2020-02-11 01:12:21 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 2769906a-0773-4447-bc37-439f138654ba 0xc00385afd7 0xc00385afd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lc9hb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lc9hb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lc9hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:12:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:12:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:12:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:12:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-11 01:12:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 01:12:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://8c19e11a76d696103eafb1e7217a064d195d10296c24da3172be5d6525dc2ca9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
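The single-line Go-style dump above carries the full Pod object but is hard to scan; the same object can be pulled in readable form straight from the API (pod name and namespace taken from the dump):

kubectl get pod test-cleanup-deployment-55ffc6b7b6-cxzl8 \
  --namespace=deployment-1035 -o yaml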
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:12:35.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1035" for this suite.

• [SLOW TEST:19.387 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":227,"skipped":3762,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:12:35.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-d889257e-4698-4d4a-90f0-65c5338dd9f3
STEP: Creating a pod to test consume configMaps
Feb 11 01:12:35.721: INFO: Waiting up to 5m0s for pod "pod-configmaps-30fb5f45-a745-4f5f-a111-1588986d3716" in namespace "configmap-3495" to be "success or failure"
Feb 11 01:12:35.737: INFO: Pod "pod-configmaps-30fb5f45-a745-4f5f-a111-1588986d3716": Phase="Pending", Reason="", readiness=false. Elapsed: 15.739491ms
Feb 11 01:12:37.742: INFO: Pod "pod-configmaps-30fb5f45-a745-4f5f-a111-1588986d3716": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020954463s
Feb 11 01:12:39.747: INFO: Pod "pod-configmaps-30fb5f45-a745-4f5f-a111-1588986d3716": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025711102s
Feb 11 01:12:42.603: INFO: Pod "pod-configmaps-30fb5f45-a745-4f5f-a111-1588986d3716": Phase="Pending", Reason="", readiness=false. Elapsed: 6.882084399s
Feb 11 01:12:44.609: INFO: Pod "pod-configmaps-30fb5f45-a745-4f5f-a111-1588986d3716": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.887728935s
STEP: Saw pod success
Feb 11 01:12:44.609: INFO: Pod "pod-configmaps-30fb5f45-a745-4f5f-a111-1588986d3716" satisfied condition "success or failure"
Feb 11 01:12:44.611: INFO: Trying to get logs from node jerma-node pod pod-configmaps-30fb5f45-a745-4f5f-a111-1588986d3716 container configmap-volume-test: 
STEP: delete the pod
Feb 11 01:12:44.647: INFO: Waiting for pod pod-configmaps-30fb5f45-a745-4f5f-a111-1588986d3716 to disappear
Feb 11 01:12:44.653: INFO: Pod pod-configmaps-30fb5f45-a745-4f5f-a111-1588986d3716 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:12:44.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3495" for this suite.

• [SLOW TEST:9.212 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":228,"skipped":3769,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:12:44.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 11 01:12:46.247: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7064ef9-bbc9-4e02-88fb-a43cb554a2a2" in namespace "projected-5493" to be "success or failure"
Feb 11 01:12:46.266: INFO: Pod "downwardapi-volume-c7064ef9-bbc9-4e02-88fb-a43cb554a2a2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.636496ms
Feb 11 01:12:48.273: INFO: Pod "downwardapi-volume-c7064ef9-bbc9-4e02-88fb-a43cb554a2a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02610692s
Feb 11 01:12:50.280: INFO: Pod "downwardapi-volume-c7064ef9-bbc9-4e02-88fb-a43cb554a2a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033335897s
Feb 11 01:12:52.287: INFO: Pod "downwardapi-volume-c7064ef9-bbc9-4e02-88fb-a43cb554a2a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040771109s
STEP: Saw pod success
Feb 11 01:12:52.288: INFO: Pod "downwardapi-volume-c7064ef9-bbc9-4e02-88fb-a43cb554a2a2" satisfied condition "success or failure"
Feb 11 01:12:52.291: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c7064ef9-bbc9-4e02-88fb-a43cb554a2a2 container client-container: 
STEP: delete the pod
Feb 11 01:12:52.321: INFO: Waiting for pod downwardapi-volume-c7064ef9-bbc9-4e02-88fb-a43cb554a2a2 to disappear
Feb 11 01:12:52.346: INFO: Pod downwardapi-volume-c7064ef9-bbc9-4e02-88fb-a43cb554a2a2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:12:52.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5493" for this suite.

• [SLOW TEST:7.697 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":229,"skipped":3776,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:12:52.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 11 01:12:52.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9173'
Feb 11 01:12:55.424: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 11 01:12:55.424: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Feb 11 01:12:55.478: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 11 01:12:55.514: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 11 01:12:55.531: INFO: scanned /root for discovery docs: 
Feb 11 01:12:55.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9173'
Feb 11 01:13:16.334: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 11 01:13:16.334: INFO: stdout: "Created e2e-test-httpd-rc-ac65b92d79fd2f667a8653855127c64b\nScaling up e2e-test-httpd-rc-ac65b92d79fd2f667a8653855127c64b from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-ac65b92d79fd2f667a8653855127c64b up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-ac65b92d79fd2f667a8653855127c64b to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Feb 11 01:13:16.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9173'
Feb 11 01:13:16.648: INFO: stderr: ""
Feb 11 01:13:16.648: INFO: stdout: "e2e-test-httpd-rc-ac65b92d79fd2f667a8653855127c64b-79phk e2e-test-httpd-rc-gghhz "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Feb 11 01:13:21.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9173'
Feb 11 01:13:21.838: INFO: stderr: ""
Feb 11 01:13:21.838: INFO: stdout: "e2e-test-httpd-rc-ac65b92d79fd2f667a8653855127c64b-79phk e2e-test-httpd-rc-gghhz "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Feb 11 01:13:26.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9173'
Feb 11 01:13:26.985: INFO: stderr: ""
Feb 11 01:13:26.985: INFO: stdout: "e2e-test-httpd-rc-ac65b92d79fd2f667a8653855127c64b-79phk "
Feb 11 01:13:26.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-ac65b92d79fd2f667a8653855127c64b-79phk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9173'
Feb 11 01:13:27.108: INFO: stderr: ""
Feb 11 01:13:27.108: INFO: stdout: "true"
Feb 11 01:13:27.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-ac65b92d79fd2f667a8653855127c64b-79phk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9173'
Feb 11 01:13:27.221: INFO: stderr: ""
Feb 11 01:13:27.221: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb 11 01:13:27.221: INFO: e2e-test-httpd-rc-ac65b92d79fd2f667a8653855127c64b-79phk is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700
Feb 11 01:13:27.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9173'
Feb 11 01:13:27.309: INFO: stderr: ""
Feb 11 01:13:27.309: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:13:27.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9173" for this suite.

• [SLOW TEST:34.957 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":280,"completed":230,"skipped":3795,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:13:27.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Feb 11 01:13:27.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-829'
Feb 11 01:13:27.934: INFO: stderr: ""
Feb 11 01:13:27.935: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 01:13:27.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-829'
Feb 11 01:13:28.061: INFO: stderr: ""
Feb 11 01:13:28.062: INFO: stdout: "update-demo-nautilus-kvhwv update-demo-nautilus-x7jmf "
Feb 11 01:13:28.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:28.183: INFO: stderr: ""
Feb 11 01:13:28.183: INFO: stdout: ""
Feb 11 01:13:28.183: INFO: update-demo-nautilus-kvhwv is created but not running
Feb 11 01:13:33.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-829'
Feb 11 01:13:33.466: INFO: stderr: ""
Feb 11 01:13:33.466: INFO: stdout: "update-demo-nautilus-kvhwv update-demo-nautilus-x7jmf "
Feb 11 01:13:33.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:34.734: INFO: stderr: ""
Feb 11 01:13:34.734: INFO: stdout: ""
Feb 11 01:13:34.734: INFO: update-demo-nautilus-kvhwv is created but not running
Feb 11 01:13:39.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-829'
Feb 11 01:13:39.899: INFO: stderr: ""
Feb 11 01:13:39.899: INFO: stdout: "update-demo-nautilus-kvhwv update-demo-nautilus-x7jmf "
Feb 11 01:13:39.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:40.051: INFO: stderr: ""
Feb 11 01:13:40.051: INFO: stdout: "true"
Feb 11 01:13:40.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:40.134: INFO: stderr: ""
Feb 11 01:13:40.134: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 01:13:40.134: INFO: validating pod update-demo-nautilus-kvhwv
Feb 11 01:13:40.144: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 01:13:40.144: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 01:13:40.145: INFO: update-demo-nautilus-kvhwv is verified up and running
Feb 11 01:13:40.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x7jmf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:40.285: INFO: stderr: ""
Feb 11 01:13:40.286: INFO: stdout: "true"
Feb 11 01:13:40.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x7jmf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:40.408: INFO: stderr: ""
Feb 11 01:13:40.408: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 01:13:40.408: INFO: validating pod update-demo-nautilus-x7jmf
Feb 11 01:13:40.412: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 01:13:40.412: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 01:13:40.412: INFO: update-demo-nautilus-x7jmf is verified up and running
STEP: scaling down the replication controller
Feb 11 01:13:40.414: INFO: scanned /root for discovery docs: 
Feb 11 01:13:40.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-829'
Feb 11 01:13:41.580: INFO: stderr: ""
Feb 11 01:13:41.580: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 01:13:41.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-829'
Feb 11 01:13:41.766: INFO: stderr: ""
Feb 11 01:13:41.766: INFO: stdout: "update-demo-nautilus-kvhwv update-demo-nautilus-x7jmf "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 11 01:13:46.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-829'
Feb 11 01:13:46.975: INFO: stderr: ""
Feb 11 01:13:46.975: INFO: stdout: "update-demo-nautilus-kvhwv "
Feb 11 01:13:46.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:47.144: INFO: stderr: ""
Feb 11 01:13:47.144: INFO: stdout: "true"
Feb 11 01:13:47.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:47.247: INFO: stderr: ""
Feb 11 01:13:47.247: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 01:13:47.247: INFO: validating pod update-demo-nautilus-kvhwv
Feb 11 01:13:47.255: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 01:13:47.255: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 01:13:47.255: INFO: update-demo-nautilus-kvhwv is verified up and running
STEP: scaling up the replication controller
Feb 11 01:13:47.259: INFO: scanned /root for discovery docs: 
Feb 11 01:13:47.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-829'
Feb 11 01:13:48.607: INFO: stderr: ""
Feb 11 01:13:48.607: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 11 01:13:48.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-829'
Feb 11 01:13:48.738: INFO: stderr: ""
Feb 11 01:13:48.739: INFO: stdout: "update-demo-nautilus-4zxqf update-demo-nautilus-kvhwv "
Feb 11 01:13:48.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zxqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:48.858: INFO: stderr: ""
Feb 11 01:13:48.858: INFO: stdout: ""
Feb 11 01:13:48.858: INFO: update-demo-nautilus-4zxqf is created but not running
Feb 11 01:13:53.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-829'
Feb 11 01:13:54.048: INFO: stderr: ""
Feb 11 01:13:54.049: INFO: stdout: "update-demo-nautilus-4zxqf update-demo-nautilus-kvhwv "
Feb 11 01:13:54.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zxqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:54.126: INFO: stderr: ""
Feb 11 01:13:54.126: INFO: stdout: ""
Feb 11 01:13:54.126: INFO: update-demo-nautilus-4zxqf is created but not running
Feb 11 01:13:59.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-829'
Feb 11 01:13:59.273: INFO: stderr: ""
Feb 11 01:13:59.273: INFO: stdout: "update-demo-nautilus-4zxqf update-demo-nautilus-kvhwv "
Feb 11 01:13:59.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zxqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:59.381: INFO: stderr: ""
Feb 11 01:13:59.382: INFO: stdout: "true"
Feb 11 01:13:59.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4zxqf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:59.502: INFO: stderr: ""
Feb 11 01:13:59.502: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 01:13:59.502: INFO: validating pod update-demo-nautilus-4zxqf
Feb 11 01:13:59.507: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 01:13:59.507: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 01:13:59.507: INFO: update-demo-nautilus-4zxqf is verified up and running
Feb 11 01:13:59.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:59.623: INFO: stderr: ""
Feb 11 01:13:59.623: INFO: stdout: "true"
Feb 11 01:13:59.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kvhwv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-829'
Feb 11 01:13:59.723: INFO: stderr: ""
Feb 11 01:13:59.723: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 11 01:13:59.723: INFO: validating pod update-demo-nautilus-kvhwv
Feb 11 01:13:59.731: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 11 01:13:59.731: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 11 01:13:59.731: INFO: update-demo-nautilus-kvhwv is verified up and running
STEP: using delete to clean up resources
Feb 11 01:13:59.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-829'
Feb 11 01:13:59.865: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 11 01:13:59.866: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 11 01:13:59.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-829'
Feb 11 01:13:59.997: INFO: stderr: "No resources found in kubectl-829 namespace.\n"
Feb 11 01:13:59.997: INFO: stdout: ""
Feb 11 01:13:59.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-829 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 11 01:14:00.112: INFO: stderr: ""
Feb 11 01:14:00.112: INFO: stdout: "update-demo-nautilus-4zxqf\nupdate-demo-nautilus-kvhwv\n"
Feb 11 01:14:00.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-829'
Feb 11 01:14:00.765: INFO: stderr: "No resources found in kubectl-829 namespace.\n"
Feb 11 01:14:00.765: INFO: stdout: ""
Feb 11 01:14:00.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-829 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 11 01:14:01.349: INFO: stderr: ""
Feb 11 01:14:01.349: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:14:01.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-829" for this suite.

• [SLOW TEST:34.090 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":280,"completed":231,"skipped":3803,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:14:01.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2246
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2246
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2246
Feb 11 01:14:01.940: INFO: Found 0 stateful pods, waiting for 1
Feb 11 01:14:11.946: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 11 01:14:11.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2246 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 11 01:14:12.329: INFO: stderr: "I0211 01:14:12.112710    3470 log.go:172] (0xc000952bb0) (0xc00091a1e0) Create stream\nI0211 01:14:12.112895    3470 log.go:172] (0xc000952bb0) (0xc00091a1e0) Stream added, broadcasting: 1\nI0211 01:14:12.118488    3470 log.go:172] (0xc000952bb0) Reply frame received for 1\nI0211 01:14:12.118533    3470 log.go:172] (0xc000952bb0) (0xc000932140) Create stream\nI0211 01:14:12.118543    3470 log.go:172] (0xc000952bb0) (0xc000932140) Stream added, broadcasting: 3\nI0211 01:14:12.119573    3470 log.go:172] (0xc000952bb0) Reply frame received for 3\nI0211 01:14:12.119602    3470 log.go:172] (0xc000952bb0) (0xc00091a640) Create stream\nI0211 01:14:12.119611    3470 log.go:172] (0xc000952bb0) (0xc00091a640) Stream added, broadcasting: 5\nI0211 01:14:12.120924    3470 log.go:172] (0xc000952bb0) Reply frame received for 5\nI0211 01:14:12.212818    3470 log.go:172] (0xc000952bb0) Data frame received for 5\nI0211 01:14:12.212914    3470 log.go:172] (0xc00091a640) (5) Data frame handling\nI0211 01:14:12.212962    3470 log.go:172] (0xc00091a640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0211 01:14:12.240864    3470 log.go:172] (0xc000952bb0) Data frame received for 3\nI0211 01:14:12.240908    3470 log.go:172] (0xc000932140) (3) Data frame handling\nI0211 01:14:12.240958    3470 log.go:172] (0xc000932140) (3) Data frame sent\nI0211 01:14:12.313595    3470 log.go:172] (0xc000952bb0) (0xc000932140) Stream removed, broadcasting: 3\nI0211 01:14:12.313723    3470 log.go:172] (0xc000952bb0) Data frame received for 1\nI0211 01:14:12.313820    3470 log.go:172] (0xc000952bb0) (0xc00091a640) Stream removed, broadcasting: 5\nI0211 01:14:12.313916    3470 log.go:172] (0xc00091a1e0) (1) Data frame handling\nI0211 01:14:12.313947    3470 log.go:172] (0xc00091a1e0) (1) Data frame sent\nI0211 01:14:12.313962    3470 log.go:172] (0xc000952bb0) (0xc00091a1e0) Stream removed, broadcasting: 1\nI0211 01:14:12.313984    3470 log.go:172] (0xc000952bb0) Go away received\nI0211 01:14:12.315145    3470 log.go:172] (0xc000952bb0) (0xc00091a1e0) Stream removed, broadcasting: 1\nI0211 01:14:12.315184    3470 log.go:172] (0xc000952bb0) (0xc000932140) Stream removed, broadcasting: 3\nI0211 01:14:12.315202    3470 log.go:172] (0xc000952bb0) (0xc00091a640) Stream removed, broadcasting: 5\n"
Feb 11 01:14:12.329: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 11 01:14:12.329: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 11 01:14:12.338: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 11 01:14:22.352: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 01:14:22.352: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 01:14:22.381: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999661s
Feb 11 01:14:23.390: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.985066476s
Feb 11 01:14:24.396: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.976750045s
Feb 11 01:14:25.404: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971217995s
Feb 11 01:14:26.413: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.962718732s
Feb 11 01:14:27.427: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.953699553s
Feb 11 01:14:28.435: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.939814527s
Feb 11 01:14:29.444: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.932118099s
Feb 11 01:14:30.451: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.923015106s
Feb 11 01:14:31.459: INFO: Verifying statefulset ss doesn't scale past 1 for another 915.39628ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2246
Feb 11 01:14:32.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2246 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 11 01:14:32.794: INFO: stderr: "I0211 01:14:32.647168    3486 log.go:172] (0xc00056d4a0) (0xc0008ba460) Create stream\nI0211 01:14:32.647271    3486 log.go:172] (0xc00056d4a0) (0xc0008ba460) Stream added, broadcasting: 1\nI0211 01:14:32.657254    3486 log.go:172] (0xc00056d4a0) Reply frame received for 1\nI0211 01:14:32.657286    3486 log.go:172] (0xc00056d4a0) (0xc0006afb80) Create stream\nI0211 01:14:32.657291    3486 log.go:172] (0xc00056d4a0) (0xc0006afb80) Stream added, broadcasting: 3\nI0211 01:14:32.658395    3486 log.go:172] (0xc00056d4a0) Reply frame received for 3\nI0211 01:14:32.658419    3486 log.go:172] (0xc00056d4a0) (0xc0005e4780) Create stream\nI0211 01:14:32.658427    3486 log.go:172] (0xc00056d4a0) (0xc0005e4780) Stream added, broadcasting: 5\nI0211 01:14:32.659378    3486 log.go:172] (0xc00056d4a0) Reply frame received for 5\nI0211 01:14:32.725278    3486 log.go:172] (0xc00056d4a0) Data frame received for 5\nI0211 01:14:32.725559    3486 log.go:172] (0xc0005e4780) (5) Data frame handling\nI0211 01:14:32.725589    3486 log.go:172] (0xc0005e4780) (5) Data frame sent\nI0211 01:14:32.725714    3486 log.go:172] (0xc00056d4a0) Data frame received for 3\nI0211 01:14:32.725748    3486 log.go:172] (0xc0006afb80) (3) Data frame handling\nI0211 01:14:32.725767    3486 log.go:172] (0xc0006afb80) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0211 01:14:32.783009    3486 log.go:172] (0xc00056d4a0) Data frame received for 1\nI0211 01:14:32.783067    3486 log.go:172] (0xc0008ba460) (1) Data frame handling\nI0211 01:14:32.783087    3486 log.go:172] (0xc0008ba460) (1) Data frame sent\nI0211 01:14:32.783113    3486 log.go:172] (0xc00056d4a0) (0xc0008ba460) Stream removed, broadcasting: 1\nI0211 01:14:32.784011    3486 log.go:172] (0xc00056d4a0) (0xc0006afb80) Stream removed, broadcasting: 3\nI0211 01:14:32.784102    3486 log.go:172] (0xc00056d4a0) (0xc0005e4780) Stream removed, broadcasting: 5\nI0211 01:14:32.784188    3486 log.go:172] (0xc00056d4a0) (0xc0008ba460) Stream removed, broadcasting: 1\nI0211 01:14:32.784207    3486 log.go:172] (0xc00056d4a0) (0xc0006afb80) Stream removed, broadcasting: 3\nI0211 01:14:32.784216    3486 log.go:172] (0xc00056d4a0) (0xc0005e4780) Stream removed, broadcasting: 5\n"
Feb 11 01:14:32.794: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 11 01:14:32.794: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 11 01:14:32.798: INFO: Found 1 stateful pods, waiting for 3
Feb 11 01:14:42.806: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 01:14:42.806: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 01:14:42.806: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 11 01:14:52.806: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 01:14:52.807: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 11 01:14:52.807: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 11 01:14:52.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2246 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 11 01:14:53.223: INFO: stderr: "I0211 01:14:53.048666    3503 log.go:172] (0xc0001156b0) (0xc000649f40) Create stream\nI0211 01:14:53.049075    3503 log.go:172] (0xc0001156b0) (0xc000649f40) Stream added, broadcasting: 1\nI0211 01:14:53.055326    3503 log.go:172] (0xc0001156b0) Reply frame received for 1\nI0211 01:14:53.055455    3503 log.go:172] (0xc0001156b0) (0xc000618820) Create stream\nI0211 01:14:53.055479    3503 log.go:172] (0xc0001156b0) (0xc000618820) Stream added, broadcasting: 3\nI0211 01:14:53.057820    3503 log.go:172] (0xc0001156b0) Reply frame received for 3\nI0211 01:14:53.057886    3503 log.go:172] (0xc0001156b0) (0xc0005b5360) Create stream\nI0211 01:14:53.057896    3503 log.go:172] (0xc0001156b0) (0xc0005b5360) Stream added, broadcasting: 5\nI0211 01:14:53.059991    3503 log.go:172] (0xc0001156b0) Reply frame received for 5\nI0211 01:14:53.148858    3503 log.go:172] (0xc0001156b0) Data frame received for 3\nI0211 01:14:53.148961    3503 log.go:172] (0xc000618820) (3) Data frame handling\nI0211 01:14:53.149003    3503 log.go:172] (0xc000618820) (3) Data frame sent\nI0211 01:14:53.149049    3503 log.go:172] (0xc0001156b0) Data frame received for 5\nI0211 01:14:53.149073    3503 log.go:172] (0xc0005b5360) (5) Data frame handling\nI0211 01:14:53.149089    3503 log.go:172] (0xc0005b5360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0211 01:14:53.214472    3503 log.go:172] (0xc0001156b0) Data frame received for 1\nI0211 01:14:53.214539    3503 log.go:172] (0xc0001156b0) (0xc0005b5360) Stream removed, broadcasting: 5\nI0211 01:14:53.214589    3503 log.go:172] (0xc0001156b0) (0xc000618820) Stream removed, broadcasting: 3\nI0211 01:14:53.214632    3503 log.go:172] (0xc000649f40) (1) Data frame handling\nI0211 01:14:53.214680    3503 log.go:172] (0xc000649f40) (1) Data frame sent\nI0211 01:14:53.214691    3503 log.go:172] (0xc0001156b0) (0xc000649f40) Stream removed, broadcasting: 1\nI0211 01:14:53.214713    3503 log.go:172] (0xc0001156b0) Go away received\nI0211 01:14:53.215458    3503 log.go:172] (0xc0001156b0) (0xc000649f40) Stream removed, broadcasting: 1\nI0211 01:14:53.215480    3503 log.go:172] (0xc0001156b0) (0xc000618820) Stream removed, broadcasting: 3\nI0211 01:14:53.215485    3503 log.go:172] (0xc0001156b0) (0xc0005b5360) Stream removed, broadcasting: 5\n"
Feb 11 01:14:53.223: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 11 01:14:53.223: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 11 01:14:53.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2246 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 11 01:14:53.672: INFO: stderr: "I0211 01:14:53.441659    3524 log.go:172] (0xc000a84000) (0xc000a66000) Create stream\nI0211 01:14:53.441874    3524 log.go:172] (0xc000a84000) (0xc000a66000) Stream added, broadcasting: 1\nI0211 01:14:53.446130    3524 log.go:172] (0xc000a84000) Reply frame received for 1\nI0211 01:14:53.446190    3524 log.go:172] (0xc000a84000) (0xc000b3c000) Create stream\nI0211 01:14:53.446216    3524 log.go:172] (0xc000a84000) (0xc000b3c000) Stream added, broadcasting: 3\nI0211 01:14:53.447611    3524 log.go:172] (0xc000a84000) Reply frame received for 3\nI0211 01:14:53.447638    3524 log.go:172] (0xc000a84000) (0xc000b3c140) Create stream\nI0211 01:14:53.447650    3524 log.go:172] (0xc000a84000) (0xc000b3c140) Stream added, broadcasting: 5\nI0211 01:14:53.448523    3524 log.go:172] (0xc000a84000) Reply frame received for 5\nI0211 01:14:53.518998    3524 log.go:172] (0xc000a84000) Data frame received for 5\nI0211 01:14:53.519032    3524 log.go:172] (0xc000b3c140) (5) Data frame handling\nI0211 01:14:53.519051    3524 log.go:172] (0xc000b3c140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0211 01:14:53.545796    3524 log.go:172] (0xc000a84000) Data frame received for 3\nI0211 01:14:53.545828    3524 log.go:172] (0xc000b3c000) (3) Data frame handling\nI0211 01:14:53.545847    3524 log.go:172] (0xc000b3c000) (3) Data frame sent\nI0211 01:14:53.648050    3524 log.go:172] (0xc000a84000) (0xc000b3c000) Stream removed, broadcasting: 3\nI0211 01:14:53.648538    3524 log.go:172] (0xc000a84000) Data frame received for 1\nI0211 01:14:53.648585    3524 log.go:172] (0xc000a66000) (1) Data frame handling\nI0211 01:14:53.648626    3524 log.go:172] (0xc000a66000) (1) Data frame sent\nI0211 01:14:53.648648    3524 log.go:172] (0xc000a84000) (0xc000a66000) Stream removed, broadcasting: 1\nI0211 01:14:53.648899    3524 log.go:172] (0xc000a84000) (0xc000b3c140) Stream removed, broadcasting: 5\nI0211 01:14:53.649211    3524 log.go:172] (0xc000a84000) Go away received\nI0211 01:14:53.650371    3524 log.go:172] (0xc000a84000) (0xc000a66000) Stream removed, broadcasting: 1\nI0211 01:14:53.650411    3524 log.go:172] (0xc000a84000) (0xc000b3c000) Stream removed, broadcasting: 3\nI0211 01:14:53.650420    3524 log.go:172] (0xc000a84000) (0xc000b3c140) Stream removed, broadcasting: 5\n"
Feb 11 01:14:53.672: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 11 01:14:53.672: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 11 01:14:53.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2246 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 11 01:14:54.156: INFO: stderr: "I0211 01:14:53.907035    3544 log.go:172] (0xc00094e580) (0xc000946140) Create stream\nI0211 01:14:53.907643    3544 log.go:172] (0xc00094e580) (0xc000946140) Stream added, broadcasting: 1\nI0211 01:14:53.915795    3544 log.go:172] (0xc00094e580) Reply frame received for 1\nI0211 01:14:53.915926    3544 log.go:172] (0xc00094e580) (0xc00098a3c0) Create stream\nI0211 01:14:53.915949    3544 log.go:172] (0xc00094e580) (0xc00098a3c0) Stream added, broadcasting: 3\nI0211 01:14:53.918002    3544 log.go:172] (0xc00094e580) Reply frame received for 3\nI0211 01:14:53.918060    3544 log.go:172] (0xc00094e580) (0xc0009461e0) Create stream\nI0211 01:14:53.918073    3544 log.go:172] (0xc00094e580) (0xc0009461e0) Stream added, broadcasting: 5\nI0211 01:14:53.920308    3544 log.go:172] (0xc00094e580) Reply frame received for 5\nI0211 01:14:54.005172    3544 log.go:172] (0xc00094e580) Data frame received for 5\nI0211 01:14:54.005422    3544 log.go:172] (0xc0009461e0) (5) Data frame handling\nI0211 01:14:54.005489    3544 log.go:172] (0xc0009461e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0211 01:14:54.033158    3544 log.go:172] (0xc00094e580) Data frame received for 3\nI0211 01:14:54.033350    3544 log.go:172] (0xc00098a3c0) (3) Data frame handling\nI0211 01:14:54.033393    3544 log.go:172] (0xc00098a3c0) (3) Data frame sent\nI0211 01:14:54.141262    3544 log.go:172] (0xc00094e580) Data frame received for 1\nI0211 01:14:54.141641    3544 log.go:172] (0xc00094e580) (0xc00098a3c0) Stream removed, broadcasting: 3\nI0211 01:14:54.141946    3544 log.go:172] (0xc00094e580) (0xc0009461e0) Stream removed, broadcasting: 5\nI0211 01:14:54.141998    3544 log.go:172] (0xc000946140) (1) Data frame handling\nI0211 01:14:54.142016    3544 log.go:172] (0xc000946140) (1) Data frame sent\nI0211 01:14:54.142021    3544 log.go:172] (0xc00094e580) (0xc000946140) Stream removed, broadcasting: 1\nI0211 01:14:54.142039    3544 log.go:172] (0xc00094e580) Go away received\nI0211 01:14:54.143636    3544 log.go:172] (0xc00094e580) (0xc000946140) Stream removed, broadcasting: 1\nI0211 01:14:54.143673    3544 log.go:172] (0xc00094e580) (0xc00098a3c0) Stream removed, broadcasting: 3\nI0211 01:14:54.143688    3544 log.go:172] (0xc00094e580) (0xc0009461e0) Stream removed, broadcasting: 5\n"
Feb 11 01:14:54.156: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 11 01:14:54.156: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 11 01:14:54.156: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 01:14:54.161: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 11 01:15:04.174: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 01:15:04.174: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 01:15:04.174: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 11 01:15:04.194: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999124s
Feb 11 01:15:05.204: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993225604s
Feb 11 01:15:06.225: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982632363s
Feb 11 01:15:07.233: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.961776757s
Feb 11 01:15:08.242: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.954011253s
Feb 11 01:15:09.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.945071324s
Feb 11 01:15:10.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.598005563s
Feb 11 01:15:11.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.58806957s
Feb 11 01:15:12.665: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.580661614s
Feb 11 01:15:13.674: INFO: Verifying statefulset ss doesn't scale past 3 for another 522.198871ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-2246
Feb 11 01:15:14.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2246 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 11 01:15:15.133: INFO: stderr: "I0211 01:15:14.897245    3563 log.go:172] (0xc000b42d10) (0xc0006e2140) Create stream\nI0211 01:15:14.898355    3563 log.go:172] (0xc000b42d10) (0xc0006e2140) Stream added, broadcasting: 1\nI0211 01:15:14.906675    3563 log.go:172] (0xc000b42d10) Reply frame received for 1\nI0211 01:15:14.906744    3563 log.go:172] (0xc000b42d10) (0xc0006a41e0) Create stream\nI0211 01:15:14.906766    3563 log.go:172] (0xc000b42d10) (0xc0006a41e0) Stream added, broadcasting: 3\nI0211 01:15:14.908558    3563 log.go:172] (0xc000b42d10) Reply frame received for 3\nI0211 01:15:14.908604    3563 log.go:172] (0xc000b42d10) (0xc0006e2280) Create stream\nI0211 01:15:14.908623    3563 log.go:172] (0xc000b42d10) (0xc0006e2280) Stream added, broadcasting: 5\nI0211 01:15:14.910696    3563 log.go:172] (0xc000b42d10) Reply frame received for 5\nI0211 01:15:15.019572    3563 log.go:172] (0xc000b42d10) Data frame received for 5\nI0211 01:15:15.020108    3563 log.go:172] (0xc0006e2280) (5) Data frame handling\nI0211 01:15:15.020150    3563 log.go:172] (0xc0006e2280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0211 01:15:15.020256    3563 log.go:172] (0xc000b42d10) Data frame received for 3\nI0211 01:15:15.020282    3563 log.go:172] (0xc0006a41e0) (3) Data frame handling\nI0211 01:15:15.020304    3563 log.go:172] (0xc0006a41e0) (3) Data frame sent\nI0211 01:15:15.120465    3563 log.go:172] (0xc000b42d10) (0xc0006a41e0) Stream removed, broadcasting: 3\nI0211 01:15:15.120615    3563 log.go:172] (0xc000b42d10) Data frame received for 1\nI0211 01:15:15.120659    3563 log.go:172] (0xc000b42d10) (0xc0006e2280) Stream removed, broadcasting: 5\nI0211 01:15:15.120708    3563 log.go:172] (0xc0006e2140) (1) Data frame handling\nI0211 01:15:15.120719    3563 log.go:172] (0xc0006e2140) (1) Data frame sent\nI0211 01:15:15.120724    3563 log.go:172] (0xc000b42d10) (0xc0006e2140) Stream removed, broadcasting: 1\nI0211 01:15:15.120735    3563 log.go:172] (0xc000b42d10) Go away received\nI0211 01:15:15.121771    3563 log.go:172] (0xc000b42d10) (0xc0006e2140) Stream removed, broadcasting: 1\nI0211 01:15:15.121791    3563 log.go:172] (0xc000b42d10) (0xc0006a41e0) Stream removed, broadcasting: 3\nI0211 01:15:15.121806    3563 log.go:172] (0xc000b42d10) (0xc0006e2280) Stream removed, broadcasting: 5\n"
Feb 11 01:15:15.133: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 11 01:15:15.133: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 11 01:15:15.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2246 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 11 01:15:15.509: INFO: stderr: "I0211 01:15:15.335044    3582 log.go:172] (0xc000b52fd0) (0xc000a54780) Create stream\nI0211 01:15:15.335254    3582 log.go:172] (0xc000b52fd0) (0xc000a54780) Stream added, broadcasting: 1\nI0211 01:15:15.339758    3582 log.go:172] (0xc000b52fd0) Reply frame received for 1\nI0211 01:15:15.339790    3582 log.go:172] (0xc000b52fd0) (0xc000a7a140) Create stream\nI0211 01:15:15.339799    3582 log.go:172] (0xc000b52fd0) (0xc000a7a140) Stream added, broadcasting: 3\nI0211 01:15:15.340764    3582 log.go:172] (0xc000b52fd0) Reply frame received for 3\nI0211 01:15:15.340783    3582 log.go:172] (0xc000b52fd0) (0xc000a54820) Create stream\nI0211 01:15:15.340790    3582 log.go:172] (0xc000b52fd0) (0xc000a54820) Stream added, broadcasting: 5\nI0211 01:15:15.341641    3582 log.go:172] (0xc000b52fd0) Reply frame received for 5\nI0211 01:15:15.408092    3582 log.go:172] (0xc000b52fd0) Data frame received for 5\nI0211 01:15:15.408162    3582 log.go:172] (0xc000a54820) (5) Data frame handling\nI0211 01:15:15.408182    3582 log.go:172] (0xc000a54820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0211 01:15:15.409752    3582 log.go:172] (0xc000b52fd0) Data frame received for 3\nI0211 01:15:15.409775    3582 log.go:172] (0xc000a7a140) (3) Data frame handling\nI0211 01:15:15.409787    3582 log.go:172] (0xc000a7a140) (3) Data frame sent\nI0211 01:15:15.501939    3582 log.go:172] (0xc000b52fd0) (0xc000a54820) Stream removed, broadcasting: 5\nI0211 01:15:15.502026    3582 log.go:172] (0xc000b52fd0) Data frame received for 1\nI0211 01:15:15.502062    3582 log.go:172] (0xc000b52fd0) (0xc000a7a140) Stream removed, broadcasting: 3\nI0211 01:15:15.502089    3582 log.go:172] (0xc000a54780) (1) Data frame handling\nI0211 01:15:15.502103    3582 log.go:172] (0xc000a54780) (1) Data frame sent\nI0211 01:15:15.502111    3582 log.go:172] (0xc000b52fd0) (0xc000a54780) Stream removed, broadcasting: 1\nI0211 01:15:15.502121    3582 log.go:172] (0xc000b52fd0) Go away received\nI0211 01:15:15.502716    3582 log.go:172] (0xc000b52fd0) (0xc000a54780) Stream removed, broadcasting: 1\nI0211 01:15:15.502727    3582 log.go:172] (0xc000b52fd0) (0xc000a7a140) Stream removed, broadcasting: 3\nI0211 01:15:15.502731    3582 log.go:172] (0xc000b52fd0) (0xc000a54820) Stream removed, broadcasting: 5\n"
Feb 11 01:15:15.509: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 11 01:15:15.509: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 11 01:15:15.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2246 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 11 01:15:15.905: INFO: stderr: "I0211 01:15:15.677643    3602 log.go:172] (0xc00052a000) (0xc0004f15e0) Create stream\nI0211 01:15:15.677815    3602 log.go:172] (0xc00052a000) (0xc0004f15e0) Stream added, broadcasting: 1\nI0211 01:15:15.682467    3602 log.go:172] (0xc00052a000) Reply frame received for 1\nI0211 01:15:15.682534    3602 log.go:172] (0xc00052a000) (0xc000524000) Create stream\nI0211 01:15:15.682620    3602 log.go:172] (0xc00052a000) (0xc000524000) Stream added, broadcasting: 3\nI0211 01:15:15.684240    3602 log.go:172] (0xc00052a000) Reply frame received for 3\nI0211 01:15:15.684318    3602 log.go:172] (0xc00052a000) (0xc00091e000) Create stream\nI0211 01:15:15.684332    3602 log.go:172] (0xc00052a000) (0xc00091e000) Stream added, broadcasting: 5\nI0211 01:15:15.686043    3602 log.go:172] (0xc00052a000) Reply frame received for 5\nI0211 01:15:15.770266    3602 log.go:172] (0xc00052a000) Data frame received for 5\nI0211 01:15:15.770392    3602 log.go:172] (0xc00091e000) (5) Data frame handling\nI0211 01:15:15.770438    3602 log.go:172] (0xc00091e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0211 01:15:15.773902    3602 log.go:172] (0xc00052a000) Data frame received for 3\nI0211 01:15:15.773928    3602 log.go:172] (0xc000524000) (3) Data frame handling\nI0211 01:15:15.773954    3602 log.go:172] (0xc000524000) (3) Data frame sent\nI0211 01:15:15.892289    3602 log.go:172] (0xc00052a000) Data frame received for 1\nI0211 01:15:15.892386    3602 log.go:172] (0xc0004f15e0) (1) Data frame handling\nI0211 01:15:15.892422    3602 log.go:172] (0xc0004f15e0) (1) Data frame sent\nI0211 01:15:15.892528    3602 log.go:172] (0xc00052a000) (0xc0004f15e0) Stream removed, broadcasting: 1\nI0211 01:15:15.893809    3602 log.go:172] (0xc00052a000) (0xc000524000) Stream removed, broadcasting: 3\nI0211 01:15:15.894298    3602 log.go:172] (0xc00052a000) (0xc00091e000) Stream removed, broadcasting: 5\nI0211 01:15:15.894411    3602 log.go:172] (0xc00052a000) (0xc0004f15e0) Stream removed, broadcasting: 1\nI0211 01:15:15.894437    3602 log.go:172] (0xc00052a000) (0xc000524000) Stream removed, broadcasting: 3\nI0211 01:15:15.894445    3602 log.go:172] (0xc00052a000) (0xc00091e000) Stream removed, broadcasting: 5\nI0211 01:15:15.894987    3602 log.go:172] (0xc00052a000) Go away received\n"
Feb 11 01:15:15.905: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 11 01:15:15.905: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 11 01:15:15.905: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 11 01:15:35.943: INFO: Deleting all statefulset in ns statefulset-2246
Feb 11 01:15:35.948: INFO: Scaling statefulset ss to 0
Feb 11 01:15:35.981: INFO: Waiting for statefulset status.replicas updated to 0
Feb 11 01:15:35.984: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:15:36.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2246" for this suite.

• [SLOW TEST:94.623 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":232,"skipped":3811,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
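
For reference, the "Scaling statefulset ss to 0" step above reduces to a single spec.replicas update; the StatefulSet controller then terminates pods in reverse ordinal order (ss-2, ss-1, ss-0), which is the ordering the test verifies. A minimal client-go sketch, assuming a context-aware client-go (v0.18+) and an already-built clientset; namespace and name mirror the log:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleToZero mirrors "Scaling statefulset ss to 0": set spec.replicas
    // to 0 and let the controller delete pods highest-ordinal-first.
    func scaleToZero(cs *kubernetes.Clientset) error {
        ssClient := cs.AppsV1().StatefulSets("statefulset-2246")
        ss, err := ssClient.Get(context.TODO(), "ss", metav1.GetOptions{})
        if err != nil {
            return err
        }
        zero := int32(0)
        ss.Spec.Replicas = &zero
        _, err = ssClient.Update(context.TODO(), ss, metav1.UpdateOptions{})
        return err
    }
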
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:15:36.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0211 01:15:37.265928       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 11 01:15:37.266: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:15:37.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4078" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":233,"skipped":3821,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
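
The orphaning behavior exercised above hinges on one field of the delete call: with PropagationPolicy set to Orphan, the garbage collector removes only the Deployment and leaves the owned ReplicaSet running. A sketch under the same assumptions as before (context-aware client-go); the deployment name is hypothetical, since the log does not print it:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // orphanDelete deletes a Deployment but leaves its ReplicaSets behind,
    // mirroring deleteOptions.PropagationPolicy=Orphan from the test.
    func orphanDelete(cs *kubernetes.Clientset, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return cs.AppsV1().Deployments(ns).Delete(
            context.TODO(), name,
            metav1.DeleteOptions{PropagationPolicy: &orphan},
        )
    }
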
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:15:37.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-3475
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 11 01:15:37.422: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 11 01:15:37.550: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 01:15:39.906: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 01:15:41.616: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 01:15:43.579: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 01:15:46.009: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 01:15:48.026: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 01:15:49.636: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 11 01:15:51.556: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 01:15:53.557: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 01:15:55.557: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 01:15:57.559: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 01:15:59.558: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 01:16:01.556: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 01:16:03.558: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 01:16:05.568: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 01:16:07.558: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 11 01:16:09.558: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 11 01:16:09.568: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 11 01:16:17.613: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.44.0.2&port=8081&tries=1'] Namespace:pod-network-test-3475 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 01:16:17.613: INFO: >>> kubeConfig: /root/.kube/config
I0211 01:16:17.664324       9 log.go:172] (0xc002310580) (0xc0014af5e0) Create stream
I0211 01:16:17.664429       9 log.go:172] (0xc002310580) (0xc0014af5e0) Stream added, broadcasting: 1
I0211 01:16:17.670007       9 log.go:172] (0xc002310580) Reply frame received for 1
I0211 01:16:17.670069       9 log.go:172] (0xc002310580) (0xc000cd4640) Create stream
I0211 01:16:17.670083       9 log.go:172] (0xc002310580) (0xc000cd4640) Stream added, broadcasting: 3
I0211 01:16:17.671568       9 log.go:172] (0xc002310580) Reply frame received for 3
I0211 01:16:17.671602       9 log.go:172] (0xc002310580) (0xc000b5d040) Create stream
I0211 01:16:17.671614       9 log.go:172] (0xc002310580) (0xc000b5d040) Stream added, broadcasting: 5
I0211 01:16:17.675892       9 log.go:172] (0xc002310580) Reply frame received for 5
I0211 01:16:17.758926       9 log.go:172] (0xc002310580) Data frame received for 3
I0211 01:16:17.759102       9 log.go:172] (0xc000cd4640) (3) Data frame handling
I0211 01:16:17.759126       9 log.go:172] (0xc000cd4640) (3) Data frame sent
I0211 01:16:17.820853       9 log.go:172] (0xc002310580) Data frame received for 1
I0211 01:16:17.821088       9 log.go:172] (0xc002310580) (0xc000cd4640) Stream removed, broadcasting: 3
I0211 01:16:17.821412       9 log.go:172] (0xc0014af5e0) (1) Data frame handling
I0211 01:16:17.821439       9 log.go:172] (0xc0014af5e0) (1) Data frame sent
I0211 01:16:17.821464       9 log.go:172] (0xc002310580) (0xc0014af5e0) Stream removed, broadcasting: 1
I0211 01:16:17.821767       9 log.go:172] (0xc002310580) (0xc000b5d040) Stream removed, broadcasting: 5
I0211 01:16:17.821792       9 log.go:172] (0xc002310580) (0xc0014af5e0) Stream removed, broadcasting: 1
I0211 01:16:17.821806       9 log.go:172] (0xc002310580) (0xc000cd4640) Stream removed, broadcasting: 3
I0211 01:16:17.821810       9 log.go:172] (0xc002310580) (0xc000b5d040) Stream removed, broadcasting: 5
I0211 01:16:17.822149       9 log.go:172] (0xc002310580) Go away received
Feb 11 01:16:17.822: INFO: Waiting for responses: map[]
Feb 11 01:16:17.825: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-3475 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 01:16:17.825: INFO: >>> kubeConfig: /root/.kube/config
I0211 01:16:17.863913       9 log.go:172] (0xc002c306e0) (0xc000f74780) Create stream
I0211 01:16:17.864224       9 log.go:172] (0xc002c306e0) (0xc000f74780) Stream added, broadcasting: 1
I0211 01:16:17.870108       9 log.go:172] (0xc002c306e0) Reply frame received for 1
I0211 01:16:17.870156       9 log.go:172] (0xc002c306e0) (0xc0014af900) Create stream
I0211 01:16:17.870165       9 log.go:172] (0xc002c306e0) (0xc0014af900) Stream added, broadcasting: 3
I0211 01:16:17.872705       9 log.go:172] (0xc002c306e0) Reply frame received for 3
I0211 01:16:17.872723       9 log.go:172] (0xc002c306e0) (0xc000f74b40) Create stream
I0211 01:16:17.872730       9 log.go:172] (0xc002c306e0) (0xc000f74b40) Stream added, broadcasting: 5
I0211 01:16:17.874460       9 log.go:172] (0xc002c306e0) Reply frame received for 5
I0211 01:16:18.006318       9 log.go:172] (0xc002c306e0) Data frame received for 3
I0211 01:16:18.006542       9 log.go:172] (0xc0014af900) (3) Data frame handling
I0211 01:16:18.006612       9 log.go:172] (0xc0014af900) (3) Data frame sent
I0211 01:16:18.092659       9 log.go:172] (0xc002c306e0) (0xc0014af900) Stream removed, broadcasting: 3
I0211 01:16:18.092850       9 log.go:172] (0xc002c306e0) Data frame received for 1
I0211 01:16:18.092867       9 log.go:172] (0xc000f74780) (1) Data frame handling
I0211 01:16:18.092879       9 log.go:172] (0xc000f74780) (1) Data frame sent
I0211 01:16:18.092888       9 log.go:172] (0xc002c306e0) (0xc000f74780) Stream removed, broadcasting: 1
I0211 01:16:18.093032       9 log.go:172] (0xc002c306e0) (0xc000f74b40) Stream removed, broadcasting: 5
I0211 01:16:18.093060       9 log.go:172] (0xc002c306e0) (0xc000f74780) Stream removed, broadcasting: 1
I0211 01:16:18.093070       9 log.go:172] (0xc002c306e0) (0xc0014af900) Stream removed, broadcasting: 3
I0211 01:16:18.093080       9 log.go:172] (0xc002c306e0) (0xc000f74b40) Stream removed, broadcasting: 5
I0211 01:16:18.093188       9 log.go:172] (0xc002c306e0) Go away received
Feb 11 01:16:18.093: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:16:18.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3475" for this suite.

• [SLOW TEST:40.833 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":234,"skipped":3826,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
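
The two ExecWithOptions probes above run curl inside the test pod against agnhost's /dial endpoint, which relays a UDP "hostname" request to the target netserver pod on :8081 and reports the responses. The same probe can be written directly in Go, assuming the caller can reach the pod IP (e.g. from a cluster node); io.ReadAll needs Go 1.16+, and the IPs are placeholders for the pod IPs seen in the log (10.44.0.1, 10.44.0.2, 10.32.0.4):

    package sketch

    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
    )

    // dialCheck asks the agnhost test pod (listening on :8080) to probe the
    // target pod over UDP and returns agnhost's JSON reply, e.g.
    // {"responses":["netserver-0"]}.
    func dialCheck(testPodIP, targetIP string) (string, error) {
        q := url.Values{}
        q.Set("request", "hostname")
        q.Set("protocol", "udp")
        q.Set("host", targetIP)
        q.Set("port", "8081")
        q.Set("tries", "1")
        u := fmt.Sprintf("http://%s:8080/dial?%s", testPodIP, q.Encode())

        resp, err := http.Get(u)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        return string(body), err
    }
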
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:16:18.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
Feb 11 01:16:18.813: INFO: created pod pod-service-account-defaultsa
Feb 11 01:16:18.813: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 11 01:16:18.883: INFO: created pod pod-service-account-mountsa
Feb 11 01:16:18.883: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 11 01:16:18.988: INFO: created pod pod-service-account-nomountsa
Feb 11 01:16:18.988: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 11 01:16:19.017: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 11 01:16:19.018: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 11 01:16:19.176: INFO: created pod pod-service-account-mountsa-mountspec
Feb 11 01:16:19.177: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 11 01:16:19.194: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 11 01:16:19.194: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 11 01:16:19.371: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 11 01:16:19.371: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 11 01:16:19.399: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 11 01:16:19.400: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 11 01:16:19.554: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 11 01:16:19.554: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:16:19.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6373" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":280,"completed":235,"skipped":3876,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
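
The nine pods created above walk the matrix of ServiceAccount-level and pod-level automount settings; when both levels are set, the pod's spec.automountServiceAccountToken wins, which is why the "mountsa-nomountspec" pod ends up with no token volume. A sketch of the strictest case using the usual core/v1 types; the ServiceAccount name is illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // optOutPod mirrors the "nomountsa-nomountspec" case: both the
    // ServiceAccount (not shown) and the pod spec opt out of token
    // automounting, so no token volume is mounted.
    func optOutPod() *corev1.Pod {
        no := false
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountsa-nomountspec"},
            Spec: corev1.PodSpec{
                ServiceAccountName:           "nomount-sa", // hypothetical SA name
                AutomountServiceAccountToken: &no,
                Containers: []corev1.Container{{
                    Name:  "token-test",
                    Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                }},
            },
        }
    }
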
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:16:21.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:17:00.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3851" for this suite.

• [SLOW TEST:39.019 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":236,"skipped":3902,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:17:00.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating secret secrets-956/secret-test-48e0d2f5-dfd6-4596-b1d5-c61c91c3c1d1
STEP: Creating a pod to test consume secrets
Feb 11 01:17:00.806: INFO: Waiting up to 5m0s for pod "pod-configmaps-4e7d7e66-b10c-46c9-a699-8afca9689482" in namespace "secrets-956" to be "success or failure"
Feb 11 01:17:00.809: INFO: Pod "pod-configmaps-4e7d7e66-b10c-46c9-a699-8afca9689482": Phase="Pending", Reason="", readiness=false. Elapsed: 3.621483ms
Feb 11 01:17:02.816: INFO: Pod "pod-configmaps-4e7d7e66-b10c-46c9-a699-8afca9689482": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010524081s
Feb 11 01:17:04.825: INFO: Pod "pod-configmaps-4e7d7e66-b10c-46c9-a699-8afca9689482": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019569855s
Feb 11 01:17:06.832: INFO: Pod "pod-configmaps-4e7d7e66-b10c-46c9-a699-8afca9689482": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026396617s
Feb 11 01:17:08.838: INFO: Pod "pod-configmaps-4e7d7e66-b10c-46c9-a699-8afca9689482": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.032315044s
STEP: Saw pod success
Feb 11 01:17:08.838: INFO: Pod "pod-configmaps-4e7d7e66-b10c-46c9-a699-8afca9689482" satisfied condition "success or failure"
Feb 11 01:17:08.841: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4e7d7e66-b10c-46c9-a699-8afca9689482 container env-test: 
STEP: delete the pod
Feb 11 01:17:08.952: INFO: Waiting for pod pod-configmaps-4e7d7e66-b10c-46c9-a699-8afca9689482 to disappear
Feb 11 01:17:08.965: INFO: Pod pod-configmaps-4e7d7e66-b10c-46c9-a699-8afca9689482 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:17:08.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-956" for this suite.

• [SLOW TEST:8.258 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":237,"skipped":3902,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
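
The Secrets test above consumes the secret through the environment rather than a volume: the pod declares an env var whose value comes from one key of the secret. A sketch using the secret name created in the log; the key and env var name are assumptions, since the log truncates the pod spec:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // secretEnv wires one secret key into the container environment, the
    // mechanism this test verifies.
    func secretEnv() corev1.EnvVar {
        return corev1.EnvVar{
            Name: "SECRET_DATA",
            ValueFrom: &corev1.EnvVarSource{
                SecretKeyRef: &corev1.SecretKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "secret-test-48e0d2f5-dfd6-4596-b1d5-c61c91c3c1d1",
                    },
                    Key: "data-1", // assumed key name
                },
            },
        }
    }
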
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:17:08.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 11 01:17:10.154: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 11 01:17:12.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:17:14.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:17:16.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980630, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 11 01:17:19.238: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:17:19.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5379" for this suite.
STEP: Destroying namespace "webhook-5379-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.703 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":238,"skipped":3963,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
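
The "Patching a validating webhook configuration's rules" step above can be expressed as a JSON patch that rewrites the first rule's operations, so CREATE stops (or resumes) being intercepted while the webhook stays registered. A sketch assuming context-aware client-go; the configuration name is hypothetical (the log does not print it):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // dropCreateOp narrows the first webhook rule to UPDATE only, so configMap
    // creation is no longer validated; patching the list back to include
    // CREATE re-enables rejection, as the test checks.
    func dropCreateOp(cs *kubernetes.Clientset) error {
        patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]`)
        _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Patch(
            context.TODO(), "e2e-test-validating-webhook",
            types.JSONPatchType, patch, metav1.PatchOptions{},
        )
        return err
    }
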
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:17:19.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:17:19.734: INFO: Creating deployment "test-recreate-deployment"
Feb 11 01:17:19.742: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 11 01:17:19.803: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 11 01:17:22.552: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 11 01:17:22.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:17:24.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:17:26.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:17:28.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980639, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:17:30.575: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 11 01:17:30.587: INFO: Updating deployment test-recreate-deployment
Feb 11 01:17:30.588: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 11 01:17:30.859: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-8074 /apis/apps/v1/namespaces/deployment-8074/deployments/test-recreate-deployment 5ad0149b-9c2f-4fde-941c-553f8385e352 7653154 2 2020-02-11 01:17:19 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003940c68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-11 01:17:30 +0000 UTC,LastTransitionTime:2020-02-11 01:17:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-11 01:17:30 +0000 UTC,LastTransitionTime:2020-02-11 01:17:19 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb 11 01:17:30.942: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-8074 /apis/apps/v1/namespaces/deployment-8074/replicasets/test-recreate-deployment-5f94c574ff cd43441a-11bc-45c5-b2ed-0e78cd9bfa8c 7653151 1 2020-02-11 01:17:30 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 5ad0149b-9c2f-4fde-941c-553f8385e352 0xc003940ff7 0xc003940ff8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003941058  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 11 01:17:30.942: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 11 01:17:30.942: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-8074 /apis/apps/v1/namespaces/deployment-8074/replicasets/test-recreate-deployment-799c574856 7f4474bd-6f76-48d6-a3de-a49494a75ceb 7653141 2 2020-02-11 01:17:19 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 5ad0149b-9c2f-4fde-941c-553f8385e352 0xc0039410c7 0xc0039410c8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003941138  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 11 01:17:30.948: INFO: Pod "test-recreate-deployment-5f94c574ff-x96ln" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-x96ln test-recreate-deployment-5f94c574ff- deployment-8074 /api/v1/namespaces/deployment-8074/pods/test-recreate-deployment-5f94c574ff-x96ln 404c0b36-5d82-4421-9262-0f6aa197d8ba 7653155 0 2020-02-11 01:17:30 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff cd43441a-11bc-45c5-b2ed-0e78cd9bfa8c 0xc003941587 0xc003941588}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pf5x7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pf5x7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pf5x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:17:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:17:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:17:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:17:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-11 01:17:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:17:30.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8074" for this suite.

• [SLOW TEST:11.315 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":239,"skipped":3986,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
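
The no-overlap guarantee verified above ("new pods will not run with old pods") comes entirely from the deployment strategy: with Type=Recreate, the controller scales the old ReplicaSet to zero before the new ReplicaSet creates any pods. A sketch of the one field that matters, matching the object dump above (Strategy Type:Recreate, RollingUpdate:nil):

    package sketch

    import appsv1 "k8s.io/api/apps/v1"

    // recreateStrategy shows the field that drives the behavior under test.
    func recreateStrategy() appsv1.DeploymentStrategy {
        return appsv1.DeploymentStrategy{
            Type: appsv1.RecreateDeploymentStrategyType,
            // RollingUpdate must be left nil for the Recreate type.
        }
    }
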
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:17:30.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-gzz7
STEP: Creating a pod to test atomic-volume-subpath
Feb 11 01:17:31.287: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gzz7" in namespace "subpath-9448" to be "success or failure"
Feb 11 01:17:31.304: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.589109ms
Feb 11 01:17:33.312: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024101703s
Feb 11 01:17:35.317: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029038775s
Feb 11 01:17:37.323: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03576559s
Feb 11 01:17:39.337: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049195777s
Feb 11 01:17:41.342: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054791791s
Feb 11 01:17:43.348: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Running", Reason="", readiness=true. Elapsed: 12.060860602s
Feb 11 01:17:45.354: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Running", Reason="", readiness=true. Elapsed: 14.066919039s
Feb 11 01:17:47.373: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Running", Reason="", readiness=true. Elapsed: 16.085152696s
Feb 11 01:17:49.387: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Running", Reason="", readiness=true. Elapsed: 18.099489859s
Feb 11 01:17:51.431: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Running", Reason="", readiness=true. Elapsed: 20.143857225s
Feb 11 01:17:53.439: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Running", Reason="", readiness=true. Elapsed: 22.151083693s
Feb 11 01:17:55.445: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Running", Reason="", readiness=true. Elapsed: 24.157566285s
Feb 11 01:17:57.454: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Running", Reason="", readiness=true. Elapsed: 26.166135651s
Feb 11 01:17:59.460: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Running", Reason="", readiness=true. Elapsed: 28.172797417s
Feb 11 01:18:01.540: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Running", Reason="", readiness=true. Elapsed: 30.252931594s
Feb 11 01:18:03.550: INFO: Pod "pod-subpath-test-configmap-gzz7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.262355313s
STEP: Saw pod success
Feb 11 01:18:03.550: INFO: Pod "pod-subpath-test-configmap-gzz7" satisfied condition "success or failure"
Feb 11 01:18:03.565: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-gzz7 container test-container-subpath-configmap-gzz7: 
STEP: delete the pod
Feb 11 01:18:03.657: INFO: Waiting for pod pod-subpath-test-configmap-gzz7 to disappear
Feb 11 01:18:03.671: INFO: Pod pod-subpath-test-configmap-gzz7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-gzz7
Feb 11 01:18:03.671: INFO: Deleting pod "pod-subpath-test-configmap-gzz7" in namespace "subpath-9448"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:18:03.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9448" for this suite.

• [SLOW TEST:32.711 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":240,"skipped":3987,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
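
The subpath case above attaches the configMap as a whole volume but mounts only a single key into the container via SubPath, which is what makes it sensitive to the atomic-writer update mechanics being tested. A sketch with illustrative volume, configMap, key, and path names (the log does not show the pod spec):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // subPathMount pairs a configMap volume with a mount that exposes only
    // one key of it inside the container.
    func subPathMount() (corev1.Volume, corev1.VolumeMount) {
        vol := corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
                },
            },
        }
        vm := corev1.VolumeMount{
            Name:      "configmap-volume",
            MountPath: "/test-volume",
            SubPath:   "configmap-key",
        }
        return vol, vm
    }
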
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:18:03.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 11 01:18:04.521: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 11 01:18:06.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:18:08.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:18:10.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980684, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 11 01:18:13.672: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:18:13.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Creating a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:18:15.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2619" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.619 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":241,"skipped":3989,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:18:15.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-7548/configmap-test-c0843c93-e314-4062-9a80-6092526c19e5
STEP: Creating a pod to test consume configMaps
Feb 11 01:18:15.463: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975" in namespace "configmap-7548" to be "success or failure"
Feb 11 01:18:15.471: INFO: Pod "pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975": Phase="Pending", Reason="", readiness=false. Elapsed: 7.507476ms
Feb 11 01:18:17.483: INFO: Pod "pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019287994s
Feb 11 01:18:19.504: INFO: Pod "pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040435651s
Feb 11 01:18:21.514: INFO: Pod "pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05072731s
Feb 11 01:18:23.525: INFO: Pod "pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061950126s
Feb 11 01:18:25.536: INFO: Pod "pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072102066s
STEP: Saw pod success
Feb 11 01:18:25.536: INFO: Pod "pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975" satisfied condition "success or failure"
Feb 11 01:18:25.541: INFO: Trying to get logs from node jerma-node pod pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975 container env-test: <nil>
STEP: delete the pod
Feb 11 01:18:25.619: INFO: Waiting for pod pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975 to disappear
Feb 11 01:18:25.623: INFO: Pod pod-configmaps-bd75a79d-ae66-4736-a883-30a69445e975 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:18:25.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7548" for this suite.

• [SLOW TEST:10.310 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":242,"skipped":3992,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:18:25.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-221857d1-3abd-42b2-bded-606d197f4706
STEP: Creating a pod to test consume configMaps
Feb 11 01:18:25.793: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49fd466f-9817-4117-9577-48547b62143f" in namespace "projected-8435" to be "success or failure"
Feb 11 01:18:25.844: INFO: Pod "pod-projected-configmaps-49fd466f-9817-4117-9577-48547b62143f": Phase="Pending", Reason="", readiness=false. Elapsed: 50.921596ms
Feb 11 01:18:27.852: INFO: Pod "pod-projected-configmaps-49fd466f-9817-4117-9577-48547b62143f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059055145s
Feb 11 01:18:29.862: INFO: Pod "pod-projected-configmaps-49fd466f-9817-4117-9577-48547b62143f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068161642s
Feb 11 01:18:31.873: INFO: Pod "pod-projected-configmaps-49fd466f-9817-4117-9577-48547b62143f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079129306s
Feb 11 01:18:33.889: INFO: Pod "pod-projected-configmaps-49fd466f-9817-4117-9577-48547b62143f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09544871s
STEP: Saw pod success
Feb 11 01:18:33.889: INFO: Pod "pod-projected-configmaps-49fd466f-9817-4117-9577-48547b62143f" satisfied condition "success or failure"
Feb 11 01:18:33.895: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-49fd466f-9817-4117-9577-48547b62143f container projected-configmap-volume-test: <nil>
STEP: delete the pod
Feb 11 01:18:34.384: INFO: Waiting for pod pod-projected-configmaps-49fd466f-9817-4117-9577-48547b62143f to disappear
Feb 11 01:18:34.394: INFO: Pod pod-projected-configmaps-49fd466f-9817-4117-9577-48547b62143f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:18:34.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8435" for this suite.

• [SLOW TEST:8.933 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":243,"skipped":4013,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:18:34.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 11 01:18:42.761: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9435 PodName:pod-sharedvolume-302158d3-044a-44c5-966b-58fe71b87c88 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 11 01:18:42.761: INFO: >>> kubeConfig: /root/.kube/config
I0211 01:18:42.801830       9 log.go:172] (0xc002dd8d10) (0xc001a25c20) Create stream
I0211 01:18:42.801971       9 log.go:172] (0xc002dd8d10) (0xc001a25c20) Stream added, broadcasting: 1
I0211 01:18:42.806849       9 log.go:172] (0xc002dd8d10) Reply frame received for 1
I0211 01:18:42.806894       9 log.go:172] (0xc002dd8d10) (0xc0011832c0) Create stream
I0211 01:18:42.806908       9 log.go:172] (0xc002dd8d10) (0xc0011832c0) Stream added, broadcasting: 3
I0211 01:18:42.807927       9 log.go:172] (0xc002dd8d10) Reply frame received for 3
I0211 01:18:42.807945       9 log.go:172] (0xc002dd8d10) (0xc002251900) Create stream
I0211 01:18:42.807953       9 log.go:172] (0xc002dd8d10) (0xc002251900) Stream added, broadcasting: 5
I0211 01:18:42.808983       9 log.go:172] (0xc002dd8d10) Reply frame received for 5
I0211 01:18:42.927192       9 log.go:172] (0xc002dd8d10) Data frame received for 3
I0211 01:18:42.927287       9 log.go:172] (0xc0011832c0) (3) Data frame handling
I0211 01:18:42.927325       9 log.go:172] (0xc0011832c0) (3) Data frame sent
I0211 01:18:43.018890       9 log.go:172] (0xc002dd8d10) (0xc002251900) Stream removed, broadcasting: 5
I0211 01:18:43.019005       9 log.go:172] (0xc002dd8d10) Data frame received for 1
I0211 01:18:43.019028       9 log.go:172] (0xc002dd8d10) (0xc0011832c0) Stream removed, broadcasting: 3
I0211 01:18:43.019066       9 log.go:172] (0xc001a25c20) (1) Data frame handling
I0211 01:18:43.019082       9 log.go:172] (0xc001a25c20) (1) Data frame sent
I0211 01:18:43.019090       9 log.go:172] (0xc002dd8d10) (0xc001a25c20) Stream removed, broadcasting: 1
I0211 01:18:43.019101       9 log.go:172] (0xc002dd8d10) Go away received
I0211 01:18:43.019308       9 log.go:172] (0xc002dd8d10) (0xc001a25c20) Stream removed, broadcasting: 1
I0211 01:18:43.019324       9 log.go:172] (0xc002dd8d10) (0xc0011832c0) Stream removed, broadcasting: 3
I0211 01:18:43.019336       9 log.go:172] (0xc002dd8d10) (0xc002251900) Stream removed, broadcasting: 5
Feb 11 01:18:43.019: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:18:43.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9435" for this suite.

• [SLOW TEST:8.457 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":244,"skipped":4024,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:18:43.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 11 01:18:43.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d42841b-3d60-475a-b13f-30e96227dd11" in namespace "downward-api-2353" to be "success or failure"
Feb 11 01:18:43.146: INFO: Pod "downwardapi-volume-2d42841b-3d60-475a-b13f-30e96227dd11": Phase="Pending", Reason="", readiness=false. Elapsed: 9.92653ms
Feb 11 01:18:45.155: INFO: Pod "downwardapi-volume-2d42841b-3d60-475a-b13f-30e96227dd11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018756788s
Feb 11 01:18:47.164: INFO: Pod "downwardapi-volume-2d42841b-3d60-475a-b13f-30e96227dd11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027664362s
Feb 11 01:18:49.174: INFO: Pod "downwardapi-volume-2d42841b-3d60-475a-b13f-30e96227dd11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037638532s
Feb 11 01:18:51.182: INFO: Pod "downwardapi-volume-2d42841b-3d60-475a-b13f-30e96227dd11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045279153s
STEP: Saw pod success
Feb 11 01:18:51.182: INFO: Pod "downwardapi-volume-2d42841b-3d60-475a-b13f-30e96227dd11" satisfied condition "success or failure"
Feb 11 01:18:51.186: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2d42841b-3d60-475a-b13f-30e96227dd11 container client-container: <nil>
STEP: delete the pod
Feb 11 01:18:51.226: INFO: Waiting for pod downwardapi-volume-2d42841b-3d60-475a-b13f-30e96227dd11 to disappear
Feb 11 01:18:51.236: INFO: Pod downwardapi-volume-2d42841b-3d60-475a-b13f-30e96227dd11 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:18:51.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2353" for this suite.

• [SLOW TEST:8.286 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":245,"skipped":4024,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:18:51.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 11 01:18:51.477: INFO: Waiting up to 5m0s for pod "pod-92db3775-1bed-4496-82ac-c9383844fc65" in namespace "emptydir-5787" to be "success or failure"
Feb 11 01:18:51.489: INFO: Pod "pod-92db3775-1bed-4496-82ac-c9383844fc65": Phase="Pending", Reason="", readiness=false. Elapsed: 12.158637ms
Feb 11 01:18:53.496: INFO: Pod "pod-92db3775-1bed-4496-82ac-c9383844fc65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019132552s
Feb 11 01:18:55.503: INFO: Pod "pod-92db3775-1bed-4496-82ac-c9383844fc65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026207557s
Feb 11 01:18:57.565: INFO: Pod "pod-92db3775-1bed-4496-82ac-c9383844fc65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087961733s
Feb 11 01:18:59.575: INFO: Pod "pod-92db3775-1bed-4496-82ac-c9383844fc65": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097581478s
Feb 11 01:19:01.600: INFO: Pod "pod-92db3775-1bed-4496-82ac-c9383844fc65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122987199s
STEP: Saw pod success
Feb 11 01:19:01.600: INFO: Pod "pod-92db3775-1bed-4496-82ac-c9383844fc65" satisfied condition "success or failure"
Feb 11 01:19:01.607: INFO: Trying to get logs from node jerma-node pod pod-92db3775-1bed-4496-82ac-c9383844fc65 container test-container: <nil>
STEP: delete the pod
Feb 11 01:19:01.668: INFO: Waiting for pod pod-92db3775-1bed-4496-82ac-c9383844fc65 to disappear
Feb 11 01:19:01.678: INFO: Pod pod-92db3775-1bed-4496-82ac-c9383844fc65 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:19:01.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5787" for this suite.

• [SLOW TEST:10.373 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":246,"skipped":4042,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:19:01.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb 11 01:19:01.796: INFO: >>> kubeConfig: /root/.kube/config
Feb 11 01:19:04.734: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:19:15.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4145" for this suite.

• [SLOW TEST:13.756 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":247,"skipped":4052,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:19:15.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting the proxy server
Feb 11 01:19:15.559: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:19:15.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7600" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":280,"completed":248,"skipped":4082,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:19:15.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:19:15.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Feb 11 01:19:18.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7520 create -f -'
Feb 11 01:19:22.005: INFO: stderr: ""
Feb 11 01:19:22.005: INFO: stdout: "e2e-test-crd-publish-openapi-6333-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 11 01:19:22.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7520 delete e2e-test-crd-publish-openapi-6333-crds test-foo'
Feb 11 01:19:22.219: INFO: stderr: ""
Feb 11 01:19:22.220: INFO: stdout: "e2e-test-crd-publish-openapi-6333-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Feb 11 01:19:22.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7520 apply -f -'
Feb 11 01:19:22.768: INFO: stderr: ""
Feb 11 01:19:22.768: INFO: stdout: "e2e-test-crd-publish-openapi-6333-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 11 01:19:22.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7520 delete e2e-test-crd-publish-openapi-6333-crds test-foo'
Feb 11 01:19:22.939: INFO: stderr: ""
Feb 11 01:19:22.939: INFO: stdout: "e2e-test-crd-publish-openapi-6333-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Feb 11 01:19:22.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7520 create -f -'
Feb 11 01:19:23.446: INFO: rc: 1
Feb 11 01:19:23.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7520 apply -f -'
Feb 11 01:19:23.872: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Feb 11 01:19:23.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7520 create -f -'
Feb 11 01:19:24.338: INFO: rc: 1
Feb 11 01:19:24.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7520 apply -f -'
Feb 11 01:19:24.587: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Feb 11 01:19:24.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6333-crds'
Feb 11 01:19:24.880: INFO: stderr: ""
Feb 11 01:19:24.880: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6333-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Feb 11 01:19:24.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6333-crds.metadata'
Feb 11 01:19:25.665: INFO: stderr: ""
Feb 11 01:19:25.665: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6333-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Feb 11 01:19:25.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6333-crds.spec'
Feb 11 01:19:26.057: INFO: stderr: ""
Feb 11 01:19:26.058: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6333-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Feb 11 01:19:26.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6333-crds.spec.bars'
Feb 11 01:19:26.519: INFO: stderr: ""
Feb 11 01:19:26.519: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-6333-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb 11 01:19:26.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6333-crds.spec.bars2'
Feb 11 01:19:26.958: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:19:29.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7520" for this suite.

• [SLOW TEST:14.026 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":249,"skipped":4105,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:19:29.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 11 01:19:29.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6ced4e5-307d-4bb9-bb3a-436bf8cd92f0" in namespace "downward-api-3618" to be "success or failure"
Feb 11 01:19:29.883: INFO: Pod "downwardapi-volume-c6ced4e5-307d-4bb9-bb3a-436bf8cd92f0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.513739ms
Feb 11 01:19:31.895: INFO: Pod "downwardapi-volume-c6ced4e5-307d-4bb9-bb3a-436bf8cd92f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040274204s
Feb 11 01:19:33.905: INFO: Pod "downwardapi-volume-c6ced4e5-307d-4bb9-bb3a-436bf8cd92f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050508951s
Feb 11 01:19:35.913: INFO: Pod "downwardapi-volume-c6ced4e5-307d-4bb9-bb3a-436bf8cd92f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057999995s
STEP: Saw pod success
Feb 11 01:19:35.913: INFO: Pod "downwardapi-volume-c6ced4e5-307d-4bb9-bb3a-436bf8cd92f0" satisfied condition "success or failure"
Feb 11 01:19:35.917: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c6ced4e5-307d-4bb9-bb3a-436bf8cd92f0 container client-container: <nil>
STEP: delete the pod
Feb 11 01:19:35.984: INFO: Waiting for pod downwardapi-volume-c6ced4e5-307d-4bb9-bb3a-436bf8cd92f0 to disappear
Feb 11 01:19:35.990: INFO: Pod downwardapi-volume-c6ced4e5-307d-4bb9-bb3a-436bf8cd92f0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:19:35.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3618" for this suite.

• [SLOW TEST:6.291 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":250,"skipped":4112,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:19:36.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-5c788551-96ce-413b-9505-7087a2eed47f
STEP: Creating a pod to test consume secrets
Feb 11 01:19:36.138: INFO: Waiting up to 5m0s for pod "pod-secrets-a6ce836c-a68c-4174-8511-2f87f2b68836" in namespace "secrets-5251" to be "success or failure"
Feb 11 01:19:36.142: INFO: Pod "pod-secrets-a6ce836c-a68c-4174-8511-2f87f2b68836": Phase="Pending", Reason="", readiness=false. Elapsed: 3.638402ms
Feb 11 01:19:38.149: INFO: Pod "pod-secrets-a6ce836c-a68c-4174-8511-2f87f2b68836": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0108322s
Feb 11 01:19:40.158: INFO: Pod "pod-secrets-a6ce836c-a68c-4174-8511-2f87f2b68836": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019391514s
Feb 11 01:19:42.166: INFO: Pod "pod-secrets-a6ce836c-a68c-4174-8511-2f87f2b68836": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027457674s
Feb 11 01:19:44.172: INFO: Pod "pod-secrets-a6ce836c-a68c-4174-8511-2f87f2b68836": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.03332831s
STEP: Saw pod success
Feb 11 01:19:44.172: INFO: Pod "pod-secrets-a6ce836c-a68c-4174-8511-2f87f2b68836" satisfied condition "success or failure"
Feb 11 01:19:44.179: INFO: Trying to get logs from node jerma-node pod pod-secrets-a6ce836c-a68c-4174-8511-2f87f2b68836 container secret-volume-test: <nil>
STEP: delete the pod
Feb 11 01:19:44.245: INFO: Waiting for pod pod-secrets-a6ce836c-a68c-4174-8511-2f87f2b68836 to disappear
Feb 11 01:19:44.251: INFO: Pod pod-secrets-a6ce836c-a68c-4174-8511-2f87f2b68836 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:19:44.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5251" for this suite.

• [SLOW TEST:8.250 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":251,"skipped":4116,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:19:44.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-3f675d9c-64e0-4cf8-89cd-859efa216153
STEP: Creating a pod to test consume secrets
Feb 11 01:19:44.370: INFO: Waiting up to 5m0s for pod "pod-secrets-605d3cea-a0eb-4090-a731-10176a6a6a36" in namespace "secrets-9083" to be "success or failure"
Feb 11 01:19:44.386: INFO: Pod "pod-secrets-605d3cea-a0eb-4090-a731-10176a6a6a36": Phase="Pending", Reason="", readiness=false. Elapsed: 15.753419ms
Feb 11 01:19:46.394: INFO: Pod "pod-secrets-605d3cea-a0eb-4090-a731-10176a6a6a36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023180559s
Feb 11 01:19:48.400: INFO: Pod "pod-secrets-605d3cea-a0eb-4090-a731-10176a6a6a36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029215226s
Feb 11 01:19:50.408: INFO: Pod "pod-secrets-605d3cea-a0eb-4090-a731-10176a6a6a36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037627302s
Feb 11 01:19:52.415: INFO: Pod "pod-secrets-605d3cea-a0eb-4090-a731-10176a6a6a36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044382111s
STEP: Saw pod success
Feb 11 01:19:52.415: INFO: Pod "pod-secrets-605d3cea-a0eb-4090-a731-10176a6a6a36" satisfied condition "success or failure"
Feb 11 01:19:52.421: INFO: Trying to get logs from node jerma-node pod pod-secrets-605d3cea-a0eb-4090-a731-10176a6a6a36 container secret-volume-test: <nil>
STEP: delete the pod
Feb 11 01:19:52.502: INFO: Waiting for pod pod-secrets-605d3cea-a0eb-4090-a731-10176a6a6a36 to disappear
Feb 11 01:19:52.524: INFO: Pod pod-secrets-605d3cea-a0eb-4090-a731-10176a6a6a36 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:19:52.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9083" for this suite.

• [SLOW TEST:8.294 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":252,"skipped":4145,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:19:52.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:20:08.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4686" for this suite.

• [SLOW TEST:16.438 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":280,"completed":253,"skipped":4157,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:20:09.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Feb 11 01:20:09.160: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
Feb 11 01:20:09.821: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 11 01:20:12.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:20:14.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:20:16.104: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:20:18.103: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980809, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:20:21.128: INFO: Waited 947.499791ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:20:21.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2529" for this suite.

• [SLOW TEST:12.939 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":254,"skipped":4159,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:20:21.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name projected-secret-test-98884f26-8fa6-43c1-8a0d-9558f98a96cc
STEP: Creating a pod to test consume secrets
Feb 11 01:20:22.129: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0" in namespace "projected-3804" to be "success or failure"
Feb 11 01:20:22.208: INFO: Pod "pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0": Phase="Pending", Reason="", readiness=false. Elapsed: 78.43069ms
Feb 11 01:20:24.214: INFO: Pod "pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08504467s
Feb 11 01:20:26.223: INFO: Pod "pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093384014s
Feb 11 01:20:28.230: INFO: Pod "pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10039438s
Feb 11 01:20:30.237: INFO: Pod "pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107596341s
Feb 11 01:20:32.247: INFO: Pod "pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117186741s
STEP: Saw pod success
Feb 11 01:20:32.247: INFO: Pod "pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0" satisfied condition "success or failure"
Feb 11 01:20:32.252: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0 container secret-volume-test: 
STEP: delete the pod
Feb 11 01:20:32.569: INFO: Waiting for pod pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0 to disappear
Feb 11 01:20:32.577: INFO: Pod pod-projected-secrets-a2fd5520-4484-445c-bcb7-a5684227bfe0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:20:32.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3804" for this suite.

• [SLOW TEST:10.646 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":255,"skipped":4205,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:20:32.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 11 01:20:32.727: INFO: PodSpec: initContainers in spec.initContainers
Feb 11 01:21:31.125: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-041dd126-eee2-4038-bda5-615a86384e5b", GenerateName:"", Namespace:"init-container-9462", SelfLink:"/api/v1/namespaces/init-container-9462/pods/pod-init-041dd126-eee2-4038-bda5-615a86384e5b", UID:"7c639b13-aa31-4e51-933c-2f5f9e2b3c8a", ResourceVersion:"7654247", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716980832, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"727658310"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-r2kkc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005ec4000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2kkc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2kkc", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-r2kkc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0037da138), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002eb0180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037da280)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037da2a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0037da2a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0037da2ac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980832, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980832, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980832, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980832, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc002ee0060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a540e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a54150)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://0a14dae5949606d899cc58c124977260e923adb5bfed23161743e3b71d3aed3f", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ee00e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ee00a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0037da32f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:21:31.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9462" for this suite.

• [SLOW TEST:58.569 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":256,"skipped":4224,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:21:31.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Feb 11 01:21:31.304: INFO: Waiting up to 5m0s for pod "client-containers-f30069c5-f798-43d2-88a6-d228486064f7" in namespace "containers-9022" to be "success or failure"
Feb 11 01:21:31.310: INFO: Pod "client-containers-f30069c5-f798-43d2-88a6-d228486064f7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.487388ms
Feb 11 01:21:33.319: INFO: Pod "client-containers-f30069c5-f798-43d2-88a6-d228486064f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014752522s
Feb 11 01:21:35.328: INFO: Pod "client-containers-f30069c5-f798-43d2-88a6-d228486064f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023106296s
Feb 11 01:21:37.335: INFO: Pod "client-containers-f30069c5-f798-43d2-88a6-d228486064f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030453817s
Feb 11 01:21:39.342: INFO: Pod "client-containers-f30069c5-f798-43d2-88a6-d228486064f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037110611s
STEP: Saw pod success
Feb 11 01:21:39.342: INFO: Pod "client-containers-f30069c5-f798-43d2-88a6-d228486064f7" satisfied condition "success or failure"
Feb 11 01:21:39.346: INFO: Trying to get logs from node jerma-node pod client-containers-f30069c5-f798-43d2-88a6-d228486064f7 container test-container: 
STEP: delete the pod
Feb 11 01:21:39.652: INFO: Waiting for pod client-containers-f30069c5-f798-43d2-88a6-d228486064f7 to disappear
Feb 11 01:21:39.669: INFO: Pod client-containers-f30069c5-f798-43d2-88a6-d228486064f7 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:21:39.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9022" for this suite.

• [SLOW TEST:8.519 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":257,"skipped":4235,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:21:39.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:21:39.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-774" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":258,"skipped":4239,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:21:39.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Feb 11 01:21:39.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 11 01:21:40.241: INFO: stderr: ""
Feb 11 01:21:40.241: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:21:40.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1710" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":280,"completed":259,"skipped":4264,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:21:40.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Starting the proxy
Feb 11 01:21:40.363: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix789510418/test'

STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:21:40.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7733" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":280,"completed":260,"skipped":4299,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:21:40.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:21:50.939: INFO: Waiting up to 5m0s for pod "client-envvars-151d901b-febc-40d2-97df-b49622e0db13" in namespace "pods-2926" to be "success or failure"
Feb 11 01:21:51.055: INFO: Pod "client-envvars-151d901b-febc-40d2-97df-b49622e0db13": Phase="Pending", Reason="", readiness=false. Elapsed: 115.308123ms
Feb 11 01:21:53.062: INFO: Pod "client-envvars-151d901b-febc-40d2-97df-b49622e0db13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122170635s
Feb 11 01:21:55.067: INFO: Pod "client-envvars-151d901b-febc-40d2-97df-b49622e0db13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127614434s
Feb 11 01:21:57.129: INFO: Pod "client-envvars-151d901b-febc-40d2-97df-b49622e0db13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189543067s
Feb 11 01:21:59.136: INFO: Pod "client-envvars-151d901b-febc-40d2-97df-b49622e0db13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.196875922s
STEP: Saw pod success
Feb 11 01:21:59.137: INFO: Pod "client-envvars-151d901b-febc-40d2-97df-b49622e0db13" satisfied condition "success or failure"
Feb 11 01:21:59.140: INFO: Trying to get logs from node jerma-node pod client-envvars-151d901b-febc-40d2-97df-b49622e0db13 container env3cont: 
STEP: delete the pod
Feb 11 01:21:59.172: INFO: Waiting for pod client-envvars-151d901b-febc-40d2-97df-b49622e0db13 to disappear
Feb 11 01:21:59.192: INFO: Pod client-envvars-151d901b-febc-40d2-97df-b49622e0db13 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:21:59.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2926" for this suite.

• [SLOW TEST:18.671 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":261,"skipped":4331,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:21:59.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating cluster-info
Feb 11 01:21:59.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 11 01:21:59.528: INFO: stderr: ""
Feb 11 01:21:59.528: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:21:59.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1171" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":280,"completed":262,"skipped":4367,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:21:59.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:21:59.666: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 11 01:21:59.689: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 11 01:22:04.805: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 11 01:22:06.829: INFO: Creating deployment "test-rolling-update-deployment"
Feb 11 01:22:06.836: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 11 01:22:06.850: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 11 01:22:08.863: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 11 01:22:08.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980926, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:22:10.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980926, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:22:12.883: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980927, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980927, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716980926, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 11 01:22:14.884: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 11 01:22:14.907: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-4662 /apis/apps/v1/namespaces/deployment-4662/deployments/test-rolling-update-deployment b555e5e7-016c-46f2-a5e7-94edf94073e5 7654527 1 2020-02-11 01:22:06 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b2ec88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-11 01:22:07 +0000 UTC,LastTransitionTime:2020-02-11 01:22:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-11 01:22:14 +0000 UTC,LastTransitionTime:2020-02-11 01:22:06 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 11 01:22:14.914: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-4662 /apis/apps/v1/namespaces/deployment-4662/replicasets/test-rolling-update-deployment-67cf4f6444 f2a10b4e-62ab-4577-89f2-70cbf87657c1 7654516 1 2020-02-11 01:22:06 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b555e5e7-016c-46f2-a5e7-94edf94073e5 0xc004b2f107 0xc004b2f108}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b2f178  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 11 01:22:14.914: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 11 01:22:14.915: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-4662 /apis/apps/v1/namespaces/deployment-4662/replicasets/test-rolling-update-controller 1bf384fc-d0ba-4486-934b-c1749485c62e 7654526 2 2020-02-11 01:21:59 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b555e5e7-016c-46f2-a5e7-94edf94073e5 0xc004b2f037 0xc004b2f038}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004b2f098  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 11 01:22:14.923: INFO: Pod "test-rolling-update-deployment-67cf4f6444-lxb7s" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-lxb7s test-rolling-update-deployment-67cf4f6444- deployment-4662 /api/v1/namespaces/deployment-4662/pods/test-rolling-update-deployment-67cf4f6444-lxb7s fcbb37e9-c62c-4a89-a1ec-5a49e9cb3303 7654515 0 2020-02-11 01:22:06 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 f2a10b4e-62ab-4577-89f2-70cbf87657c1 0xc004b02bf7 0xc004b02bf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7vg8j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7vg8j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7vg8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:22:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:22:13 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:22:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:22:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-11 01:22:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 01:22:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://338c9522586eb775b4a56718f5a5191b9fc890f318dd79f27bb632933ca23729,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:22:14.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4662" for this suite.

• [SLOW TEST:15.394 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":263,"skipped":4378,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:22:14.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:22:15.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5231
I0211 01:22:15.138098       9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5231, replica count: 1
I0211 01:22:16.189231       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 01:22:17.189695       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 01:22:18.190035       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 01:22:19.190451       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 01:22:20.191298       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 01:22:21.191862       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 01:22:22.192260       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 01:22:23.193137       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 01:22:24.194333       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 01:22:25.195209       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0211 01:22:26.195628       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 11 01:22:26.324: INFO: Created: latency-svc-8kdv9
Feb 11 01:22:26.374: INFO: Got endpoints: latency-svc-8kdv9 [78.144834ms]
Feb 11 01:22:26.433: INFO: Created: latency-svc-tj9gm
Feb 11 01:22:26.470: INFO: Created: latency-svc-w7dxh
Feb 11 01:22:26.470: INFO: Got endpoints: latency-svc-tj9gm [94.710042ms]
Feb 11 01:22:26.513: INFO: Got endpoints: latency-svc-w7dxh [138.633507ms]
Feb 11 01:22:26.526: INFO: Created: latency-svc-p7ndn
Feb 11 01:22:26.579: INFO: Got endpoints: latency-svc-p7ndn [204.510086ms]
Feb 11 01:22:26.610: INFO: Created: latency-svc-4scvl
Feb 11 01:22:26.691: INFO: Got endpoints: latency-svc-4scvl [315.235419ms]
Feb 11 01:22:26.702: INFO: Created: latency-svc-tnlc4
Feb 11 01:22:26.722: INFO: Got endpoints: latency-svc-tnlc4 [346.101806ms]
Feb 11 01:22:26.724: INFO: Created: latency-svc-p2zt4
Feb 11 01:22:26.747: INFO: Created: latency-svc-r2s6r
Feb 11 01:22:26.747: INFO: Got endpoints: latency-svc-p2zt4 [372.454213ms]
Feb 11 01:22:26.763: INFO: Got endpoints: latency-svc-r2s6r [387.345734ms]
Feb 11 01:22:26.784: INFO: Created: latency-svc-slhq6
Feb 11 01:22:26.891: INFO: Got endpoints: latency-svc-slhq6 [515.361473ms]
Feb 11 01:22:26.935: INFO: Created: latency-svc-wrwbw
Feb 11 01:22:26.983: INFO: Got endpoints: latency-svc-wrwbw [606.980173ms]
Feb 11 01:22:27.058: INFO: Created: latency-svc-llgps
Feb 11 01:22:27.067: INFO: Got endpoints: latency-svc-llgps [691.499017ms]
Feb 11 01:22:27.088: INFO: Created: latency-svc-rstxs
Feb 11 01:22:27.104: INFO: Got endpoints: latency-svc-rstxs [728.99738ms]
Feb 11 01:22:27.133: INFO: Created: latency-svc-7sbpn
Feb 11 01:22:27.198: INFO: Got endpoints: latency-svc-7sbpn [822.038888ms]
Feb 11 01:22:27.219: INFO: Created: latency-svc-v72jj
Feb 11 01:22:27.545: INFO: Got endpoints: latency-svc-v72jj [1.169676726s]
Feb 11 01:22:27.575: INFO: Created: latency-svc-lmdwk
Feb 11 01:22:27.596: INFO: Created: latency-svc-4w7sm
Feb 11 01:22:27.597: INFO: Got endpoints: latency-svc-lmdwk [1.221308773s]
Feb 11 01:22:27.625: INFO: Got endpoints: latency-svc-4w7sm [1.250170607s]
Feb 11 01:22:27.639: INFO: Created: latency-svc-c8lhs
Feb 11 01:22:27.688: INFO: Got endpoints: latency-svc-c8lhs [1.217639524s]
Feb 11 01:22:27.723: INFO: Created: latency-svc-9b6kd
Feb 11 01:22:27.890: INFO: Got endpoints: latency-svc-9b6kd [1.376836554s]
Feb 11 01:22:27.906: INFO: Created: latency-svc-jh9dd
Feb 11 01:22:27.942: INFO: Got endpoints: latency-svc-jh9dd [1.362023263s]
Feb 11 01:22:28.101: INFO: Created: latency-svc-dkp79
Feb 11 01:22:28.106: INFO: Got endpoints: latency-svc-dkp79 [1.415072301s]
Feb 11 01:22:28.114: INFO: Created: latency-svc-pbnnh
Feb 11 01:22:28.119: INFO: Got endpoints: latency-svc-pbnnh [1.39706394s]
Feb 11 01:22:28.134: INFO: Created: latency-svc-47tk7
Feb 11 01:22:28.137: INFO: Got endpoints: latency-svc-47tk7 [1.389984384s]
Feb 11 01:22:28.190: INFO: Created: latency-svc-sdzhn
Feb 11 01:22:28.256: INFO: Got endpoints: latency-svc-sdzhn [1.492248941s]
Feb 11 01:22:28.269: INFO: Created: latency-svc-tjjjm
Feb 11 01:22:28.273: INFO: Got endpoints: latency-svc-tjjjm [1.381491052s]
Feb 11 01:22:28.323: INFO: Created: latency-svc-nz57p
Feb 11 01:22:28.325: INFO: Got endpoints: latency-svc-nz57p [1.341629472s]
Feb 11 01:22:28.431: INFO: Created: latency-svc-7lbr8
Feb 11 01:22:28.436: INFO: Got endpoints: latency-svc-7lbr8 [1.368880857s]
Feb 11 01:22:28.484: INFO: Created: latency-svc-bhbss
Feb 11 01:22:28.493: INFO: Got endpoints: latency-svc-bhbss [1.388992915s]
Feb 11 01:22:28.572: INFO: Created: latency-svc-hpnkh
Feb 11 01:22:28.581: INFO: Got endpoints: latency-svc-hpnkh [1.383095129s]
Feb 11 01:22:28.603: INFO: Created: latency-svc-sjdvr
Feb 11 01:22:28.614: INFO: Got endpoints: latency-svc-sjdvr [1.06802695s]
Feb 11 01:22:28.653: INFO: Created: latency-svc-jxqdt
Feb 11 01:22:28.657: INFO: Got endpoints: latency-svc-jxqdt [1.060229081s]
Feb 11 01:22:28.752: INFO: Created: latency-svc-6b2vl
Feb 11 01:22:28.758: INFO: Got endpoints: latency-svc-6b2vl [1.133627267s]
Feb 11 01:22:28.777: INFO: Created: latency-svc-zx775
Feb 11 01:22:28.786: INFO: Got endpoints: latency-svc-zx775 [1.098147254s]
Feb 11 01:22:28.807: INFO: Created: latency-svc-2tnwn
Feb 11 01:22:28.819: INFO: Got endpoints: latency-svc-2tnwn [928.994837ms]
Feb 11 01:22:28.895: INFO: Created: latency-svc-hmmf7
Feb 11 01:22:28.907: INFO: Got endpoints: latency-svc-hmmf7 [965.64581ms]
Feb 11 01:22:28.924: INFO: Created: latency-svc-m42cs
Feb 11 01:22:28.928: INFO: Got endpoints: latency-svc-m42cs [822.281594ms]
Feb 11 01:22:28.970: INFO: Created: latency-svc-b2skw
Feb 11 01:22:28.978: INFO: Got endpoints: latency-svc-b2skw [858.295176ms]
Feb 11 01:22:29.057: INFO: Created: latency-svc-tkf9x
Feb 11 01:22:29.067: INFO: Got endpoints: latency-svc-tkf9x [930.187063ms]
Feb 11 01:22:29.080: INFO: Created: latency-svc-xxdjm
Feb 11 01:22:29.103: INFO: Got endpoints: latency-svc-xxdjm [847.631496ms]
Feb 11 01:22:29.110: INFO: Created: latency-svc-4nlsl
Feb 11 01:22:29.135: INFO: Got endpoints: latency-svc-4nlsl [862.437073ms]
Feb 11 01:22:29.208: INFO: Created: latency-svc-ck7ct
Feb 11 01:22:29.208: INFO: Got endpoints: latency-svc-ck7ct [883.597021ms]
Feb 11 01:22:29.244: INFO: Created: latency-svc-xqm96
Feb 11 01:22:29.252: INFO: Got endpoints: latency-svc-xqm96 [815.480028ms]
Feb 11 01:22:29.380: INFO: Created: latency-svc-8jnkn
Feb 11 01:22:29.416: INFO: Got endpoints: latency-svc-8jnkn [922.226688ms]
Feb 11 01:22:29.479: INFO: Created: latency-svc-47kl5
Feb 11 01:22:29.605: INFO: Got endpoints: latency-svc-47kl5 [1.023793515s]
Feb 11 01:22:29.623: INFO: Created: latency-svc-dgkrk
Feb 11 01:22:29.637: INFO: Got endpoints: latency-svc-dgkrk [1.022688035s]
Feb 11 01:22:29.687: INFO: Created: latency-svc-962qf
Feb 11 01:22:29.694: INFO: Got endpoints: latency-svc-962qf [1.036018861s]
Feb 11 01:22:29.841: INFO: Created: latency-svc-sxk59
Feb 11 01:22:29.850: INFO: Got endpoints: latency-svc-sxk59 [1.091525706s]
Feb 11 01:22:29.891: INFO: Created: latency-svc-drfrn
Feb 11 01:22:29.908: INFO: Got endpoints: latency-svc-drfrn [1.121459705s]
Feb 11 01:22:29.985: INFO: Created: latency-svc-zwt9h
Feb 11 01:22:30.026: INFO: Got endpoints: latency-svc-zwt9h [1.206607618s]
Feb 11 01:22:30.057: INFO: Created: latency-svc-ngjhb
Feb 11 01:22:30.067: INFO: Got endpoints: latency-svc-ngjhb [1.159987919s]
Feb 11 01:22:30.135: INFO: Created: latency-svc-rxlw7
Feb 11 01:22:30.175: INFO: Created: latency-svc-sdvdt
Feb 11 01:22:30.181: INFO: Got endpoints: latency-svc-rxlw7 [1.252464845s]
Feb 11 01:22:30.186: INFO: Got endpoints: latency-svc-sdvdt [1.207849719s]
Feb 11 01:22:30.433: INFO: Created: latency-svc-wfw42
Feb 11 01:22:30.462: INFO: Got endpoints: latency-svc-wfw42 [1.394117544s]
Feb 11 01:22:30.503: INFO: Created: latency-svc-p2kjt
Feb 11 01:22:30.513: INFO: Got endpoints: latency-svc-p2kjt [1.409004346s]
Feb 11 01:22:30.649: INFO: Created: latency-svc-jv9bk
Feb 11 01:22:30.656: INFO: Got endpoints: latency-svc-jv9bk [1.520538422s]
Feb 11 01:22:30.692: INFO: Created: latency-svc-9wprl
Feb 11 01:22:30.735: INFO: Got endpoints: latency-svc-9wprl [1.526473773s]
Feb 11 01:22:30.827: INFO: Created: latency-svc-dk95g
Feb 11 01:22:30.832: INFO: Got endpoints: latency-svc-dk95g [1.579844243s]
Feb 11 01:22:30.866: INFO: Created: latency-svc-nrx2j
Feb 11 01:22:30.882: INFO: Got endpoints: latency-svc-nrx2j [1.465598282s]
Feb 11 01:22:31.030: INFO: Created: latency-svc-t76fh
Feb 11 01:22:31.060: INFO: Created: latency-svc-hg2mx
Feb 11 01:22:31.060: INFO: Got endpoints: latency-svc-t76fh [1.454194028s]
Feb 11 01:22:31.069: INFO: Got endpoints: latency-svc-hg2mx [1.432174257s]
Feb 11 01:22:31.101: INFO: Created: latency-svc-tjsc9
Feb 11 01:22:31.111: INFO: Got endpoints: latency-svc-tjsc9 [1.416745881s]
Feb 11 01:22:31.178: INFO: Created: latency-svc-jzhjt
Feb 11 01:22:31.190: INFO: Got endpoints: latency-svc-jzhjt [1.338751872s]
Feb 11 01:22:31.215: INFO: Created: latency-svc-7c5vm
Feb 11 01:22:31.223: INFO: Got endpoints: latency-svc-7c5vm [1.314313981s]
Feb 11 01:22:31.254: INFO: Created: latency-svc-cpqhn
Feb 11 01:22:31.261: INFO: Got endpoints: latency-svc-cpqhn [1.234173535s]
Feb 11 01:22:31.320: INFO: Created: latency-svc-vnndh
Feb 11 01:22:31.349: INFO: Created: latency-svc-vx5dd
Feb 11 01:22:31.353: INFO: Got endpoints: latency-svc-vnndh [1.285760633s]
Feb 11 01:22:31.386: INFO: Got endpoints: latency-svc-vx5dd [1.20455119s]
Feb 11 01:22:31.392: INFO: Created: latency-svc-8c7hz
Feb 11 01:22:31.397: INFO: Got endpoints: latency-svc-8c7hz [1.211239012s]
Feb 11 01:22:31.482: INFO: Created: latency-svc-bw5pp
Feb 11 01:22:31.485: INFO: Got endpoints: latency-svc-bw5pp [1.022647246s]
Feb 11 01:22:31.527: INFO: Created: latency-svc-5hswp
Feb 11 01:22:31.541: INFO: Got endpoints: latency-svc-5hswp [1.027998818s]
Feb 11 01:22:31.547: INFO: Created: latency-svc-ps2r5
Feb 11 01:22:31.573: INFO: Created: latency-svc-cbgph
Feb 11 01:22:31.573: INFO: Got endpoints: latency-svc-ps2r5 [916.827449ms]
Feb 11 01:22:31.653: INFO: Got endpoints: latency-svc-cbgph [918.291693ms]
Feb 11 01:22:31.664: INFO: Created: latency-svc-6bg8b
Feb 11 01:22:31.673: INFO: Got endpoints: latency-svc-6bg8b [840.4326ms]
Feb 11 01:22:31.694: INFO: Created: latency-svc-6pm7p
Feb 11 01:22:31.702: INFO: Got endpoints: latency-svc-6pm7p [820.514546ms]
Feb 11 01:22:31.724: INFO: Created: latency-svc-ds8t2
Feb 11 01:22:31.729: INFO: Got endpoints: latency-svc-ds8t2 [668.665135ms]
Feb 11 01:22:31.747: INFO: Created: latency-svc-fx74h
Feb 11 01:22:31.818: INFO: Got endpoints: latency-svc-fx74h [748.755276ms]
Feb 11 01:22:31.825: INFO: Created: latency-svc-4dk5v
Feb 11 01:22:31.838: INFO: Got endpoints: latency-svc-4dk5v [727.314254ms]
Feb 11 01:22:31.870: INFO: Created: latency-svc-mzmkv
Feb 11 01:22:31.879: INFO: Got endpoints: latency-svc-mzmkv [689.234415ms]
Feb 11 01:22:31.901: INFO: Created: latency-svc-s6jl4
Feb 11 01:22:31.979: INFO: Got endpoints: latency-svc-s6jl4 [756.03218ms]
Feb 11 01:22:31.994: INFO: Created: latency-svc-b5gdd
Feb 11 01:22:32.005: INFO: Got endpoints: latency-svc-b5gdd [743.745303ms]
Feb 11 01:22:32.036: INFO: Created: latency-svc-crs7x
Feb 11 01:22:32.044: INFO: Got endpoints: latency-svc-crs7x [690.781471ms]
Feb 11 01:22:32.080: INFO: Created: latency-svc-l9lff
Feb 11 01:22:32.156: INFO: Got endpoints: latency-svc-l9lff [770.495531ms]
Feb 11 01:22:32.185: INFO: Created: latency-svc-nxfjq
Feb 11 01:22:32.185: INFO: Got endpoints: latency-svc-nxfjq [788.091378ms]
Feb 11 01:22:32.185: INFO: Created: latency-svc-vg5v2
Feb 11 01:22:32.188: INFO: Got endpoints: latency-svc-vg5v2 [703.652912ms]
Feb 11 01:22:32.206: INFO: Created: latency-svc-r8w2h
Feb 11 01:22:32.225: INFO: Created: latency-svc-7z8nc
Feb 11 01:22:32.231: INFO: Got endpoints: latency-svc-r8w2h [689.735722ms]
Feb 11 01:22:32.328: INFO: Got endpoints: latency-svc-7z8nc [754.248596ms]
Feb 11 01:22:32.330: INFO: Created: latency-svc-xtsgj
Feb 11 01:22:32.342: INFO: Got endpoints: latency-svc-xtsgj [687.929059ms]
Feb 11 01:22:32.381: INFO: Created: latency-svc-dcpdz
Feb 11 01:22:32.391: INFO: Got endpoints: latency-svc-dcpdz [718.596272ms]
Feb 11 01:22:32.412: INFO: Created: latency-svc-4bh8r
Feb 11 01:22:32.415: INFO: Got endpoints: latency-svc-4bh8r [712.914776ms]
Feb 11 01:22:32.483: INFO: Created: latency-svc-74g72
Feb 11 01:22:32.487: INFO: Got endpoints: latency-svc-74g72 [95.147945ms]
Feb 11 01:22:32.520: INFO: Created: latency-svc-ghwlv
Feb 11 01:22:32.530: INFO: Got endpoints: latency-svc-ghwlv [800.960915ms]
Feb 11 01:22:32.567: INFO: Created: latency-svc-s9m9j
Feb 11 01:22:32.631: INFO: Got endpoints: latency-svc-s9m9j [813.296307ms]
Feb 11 01:22:32.852: INFO: Created: latency-svc-6886z
Feb 11 01:22:32.904: INFO: Created: latency-svc-7fskg
Feb 11 01:22:32.904: INFO: Got endpoints: latency-svc-6886z [1.065768468s]
Feb 11 01:22:32.916: INFO: Got endpoints: latency-svc-7fskg [1.036332087s]
Feb 11 01:22:33.041: INFO: Created: latency-svc-pfxzl
Feb 11 01:22:33.069: INFO: Got endpoints: latency-svc-pfxzl [1.089816072s]
Feb 11 01:22:33.070: INFO: Created: latency-svc-jbbwt
Feb 11 01:22:33.091: INFO: Got endpoints: latency-svc-jbbwt [1.085988832s]
Feb 11 01:22:33.119: INFO: Created: latency-svc-jq929
Feb 11 01:22:33.195: INFO: Got endpoints: latency-svc-jq929 [1.150535368s]
Feb 11 01:22:33.197: INFO: Created: latency-svc-ssv7m
Feb 11 01:22:33.204: INFO: Got endpoints: latency-svc-ssv7m [1.047937285s]
Feb 11 01:22:33.224: INFO: Created: latency-svc-t5qpl
Feb 11 01:22:33.258: INFO: Got endpoints: latency-svc-t5qpl [1.072319501s]
Feb 11 01:22:33.285: INFO: Created: latency-svc-nwvr7
Feb 11 01:22:33.336: INFO: Got endpoints: latency-svc-nwvr7 [1.147936055s]
Feb 11 01:22:33.342: INFO: Created: latency-svc-x4h2c
Feb 11 01:22:33.349: INFO: Got endpoints: latency-svc-x4h2c [1.117118522s]
Feb 11 01:22:33.385: INFO: Created: latency-svc-749rg
Feb 11 01:22:33.408: INFO: Got endpoints: latency-svc-749rg [1.08013697s]
Feb 11 01:22:33.430: INFO: Created: latency-svc-4f5q7
Feb 11 01:22:33.482: INFO: Got endpoints: latency-svc-4f5q7 [1.13999776s]
Feb 11 01:22:33.505: INFO: Created: latency-svc-9skfd
Feb 11 01:22:33.517: INFO: Got endpoints: latency-svc-9skfd [1.101474878s]
Feb 11 01:22:33.542: INFO: Created: latency-svc-b6d9f
Feb 11 01:22:33.547: INFO: Got endpoints: latency-svc-b6d9f [1.060183209s]
Feb 11 01:22:33.574: INFO: Created: latency-svc-9pv6x
Feb 11 01:22:33.577: INFO: Got endpoints: latency-svc-9pv6x [1.047216473s]
Feb 11 01:22:33.652: INFO: Created: latency-svc-jmq9p
Feb 11 01:22:33.659: INFO: Got endpoints: latency-svc-jmq9p [1.027629334s]
Feb 11 01:22:33.697: INFO: Created: latency-svc-52qxf
Feb 11 01:22:33.739: INFO: Got endpoints: latency-svc-52qxf [834.622775ms]
Feb 11 01:22:33.747: INFO: Created: latency-svc-8gvjw
Feb 11 01:22:33.747: INFO: Got endpoints: latency-svc-8gvjw [830.968585ms]
Feb 11 01:22:33.802: INFO: Created: latency-svc-vwdts
Feb 11 01:22:33.822: INFO: Got endpoints: latency-svc-vwdts [753.245416ms]
Feb 11 01:22:33.824: INFO: Created: latency-svc-fjkzk
Feb 11 01:22:33.826: INFO: Got endpoints: latency-svc-fjkzk [735.542315ms]
Feb 11 01:22:33.902: INFO: Created: latency-svc-96k9n
Feb 11 01:22:33.961: INFO: Got endpoints: latency-svc-96k9n [766.313857ms]
Feb 11 01:22:33.985: INFO: Created: latency-svc-tnj6f
Feb 11 01:22:33.993: INFO: Got endpoints: latency-svc-tnj6f [788.746078ms]
Feb 11 01:22:34.040: INFO: Created: latency-svc-8znf5
Feb 11 01:22:34.164: INFO: Got endpoints: latency-svc-8znf5 [906.421578ms]
Feb 11 01:22:34.168: INFO: Created: latency-svc-bhdht
Feb 11 01:22:34.190: INFO: Got endpoints: latency-svc-bhdht [852.957149ms]
Feb 11 01:22:34.238: INFO: Created: latency-svc-5dzzl
Feb 11 01:22:34.243: INFO: Got endpoints: latency-svc-5dzzl [894.085107ms]
Feb 11 01:22:34.308: INFO: Created: latency-svc-hn8cw
Feb 11 01:22:34.319: INFO: Got endpoints: latency-svc-hn8cw [910.309435ms]
Feb 11 01:22:34.356: INFO: Created: latency-svc-k47ll
Feb 11 01:22:34.367: INFO: Got endpoints: latency-svc-k47ll [885.755622ms]
Feb 11 01:22:34.409: INFO: Created: latency-svc-tmvvn
Feb 11 01:22:34.477: INFO: Got endpoints: latency-svc-tmvvn [959.462184ms]
Feb 11 01:22:34.481: INFO: Created: latency-svc-c7knv
Feb 11 01:22:34.487: INFO: Got endpoints: latency-svc-c7knv [940.314106ms]
Feb 11 01:22:34.514: INFO: Created: latency-svc-gx56c
Feb 11 01:22:34.531: INFO: Got endpoints: latency-svc-gx56c [953.378747ms]
Feb 11 01:22:34.572: INFO: Created: latency-svc-ccvhp
Feb 11 01:22:34.643: INFO: Got endpoints: latency-svc-ccvhp [984.14679ms]
Feb 11 01:22:34.656: INFO: Created: latency-svc-vcs8r
Feb 11 01:22:34.667: INFO: Got endpoints: latency-svc-vcs8r [927.592788ms]
Feb 11 01:22:34.706: INFO: Created: latency-svc-lqz5w
Feb 11 01:22:34.719: INFO: Got endpoints: latency-svc-lqz5w [972.641733ms]
Feb 11 01:22:34.724: INFO: Created: latency-svc-852d2
Feb 11 01:22:34.819: INFO: Got endpoints: latency-svc-852d2 [996.699765ms]
Feb 11 01:22:34.836: INFO: Created: latency-svc-h2zsw
Feb 11 01:22:34.836: INFO: Got endpoints: latency-svc-h2zsw [1.009587063s]
Feb 11 01:22:34.884: INFO: Created: latency-svc-4c7jl
Feb 11 01:22:34.888: INFO: Got endpoints: latency-svc-4c7jl [926.401044ms]
Feb 11 01:22:35.101: INFO: Created: latency-svc-2tbc2
Feb 11 01:22:35.108: INFO: Got endpoints: latency-svc-2tbc2 [1.114588754s]
Feb 11 01:22:35.161: INFO: Created: latency-svc-qvxwt
Feb 11 01:22:35.167: INFO: Got endpoints: latency-svc-qvxwt [1.002001542s]
Feb 11 01:22:35.434: INFO: Created: latency-svc-9qr28
Feb 11 01:22:35.449: INFO: Got endpoints: latency-svc-9qr28 [1.259288937s]
Feb 11 01:22:35.480: INFO: Created: latency-svc-7n674
Feb 11 01:22:35.509: INFO: Got endpoints: latency-svc-7n674 [1.265684394s]
Feb 11 01:22:35.729: INFO: Created: latency-svc-p9p8c
Feb 11 01:22:35.770: INFO: Got endpoints: latency-svc-p9p8c [1.451085937s]
Feb 11 01:22:35.775: INFO: Created: latency-svc-xhlsx
Feb 11 01:22:35.810: INFO: Got endpoints: latency-svc-xhlsx [1.442025088s]
Feb 11 01:22:35.814: INFO: Created: latency-svc-cqrxd
Feb 11 01:22:35.820: INFO: Got endpoints: latency-svc-cqrxd [1.342772663s]
Feb 11 01:22:35.892: INFO: Created: latency-svc-r5n9f
Feb 11 01:22:35.902: INFO: Got endpoints: latency-svc-r5n9f [1.414414067s]
Feb 11 01:22:35.935: INFO: Created: latency-svc-n76xn
Feb 11 01:22:35.944: INFO: Got endpoints: latency-svc-n76xn [1.412838738s]
Feb 11 01:22:35.987: INFO: Created: latency-svc-2q7np
Feb 11 01:22:36.030: INFO: Got endpoints: latency-svc-2q7np [1.386248301s]
Feb 11 01:22:36.045: INFO: Created: latency-svc-9lfbn
Feb 11 01:22:36.060: INFO: Got endpoints: latency-svc-9lfbn [1.392355603s]
Feb 11 01:22:36.082: INFO: Created: latency-svc-hldcj
Feb 11 01:22:36.113: INFO: Got endpoints: latency-svc-hldcj [1.39357529s]
Feb 11 01:22:36.184: INFO: Created: latency-svc-gn7g7
Feb 11 01:22:36.212: INFO: Got endpoints: latency-svc-gn7g7 [1.392223633s]
Feb 11 01:22:36.216: INFO: Created: latency-svc-zbss7
Feb 11 01:22:36.224: INFO: Got endpoints: latency-svc-zbss7 [1.387726584s]
Feb 11 01:22:36.246: INFO: Created: latency-svc-p7wjh
Feb 11 01:22:36.256: INFO: Got endpoints: latency-svc-p7wjh [1.367956912s]
Feb 11 01:22:36.324: INFO: Created: latency-svc-bcm75
Feb 11 01:22:36.333: INFO: Got endpoints: latency-svc-bcm75 [1.224920453s]
Feb 11 01:22:36.377: INFO: Created: latency-svc-d5d4l
Feb 11 01:22:36.385: INFO: Got endpoints: latency-svc-d5d4l [1.217795533s]
Feb 11 01:22:36.474: INFO: Created: latency-svc-rnx2n
Feb 11 01:22:36.506: INFO: Created: latency-svc-28kpn
Feb 11 01:22:36.507: INFO: Got endpoints: latency-svc-rnx2n [1.057975429s]
Feb 11 01:22:36.520: INFO: Got endpoints: latency-svc-28kpn [1.010806179s]
Feb 11 01:22:36.546: INFO: Created: latency-svc-b6wrw
Feb 11 01:22:36.556: INFO: Got endpoints: latency-svc-b6wrw [785.401594ms]
Feb 11 01:22:36.618: INFO: Created: latency-svc-gfv49
Feb 11 01:22:36.650: INFO: Got endpoints: latency-svc-gfv49 [840.117944ms]
Feb 11 01:22:36.652: INFO: Created: latency-svc-ddg48
Feb 11 01:22:36.680: INFO: Got endpoints: latency-svc-ddg48 [860.451836ms]
Feb 11 01:22:36.763: INFO: Created: latency-svc-tlft4
Feb 11 01:22:36.795: INFO: Got endpoints: latency-svc-tlft4 [893.082247ms]
Feb 11 01:22:36.799: INFO: Created: latency-svc-nm7zq
Feb 11 01:22:36.816: INFO: Got endpoints: latency-svc-nm7zq [871.827098ms]
Feb 11 01:22:36.834: INFO: Created: latency-svc-zn8r2
Feb 11 01:22:36.868: INFO: Got endpoints: latency-svc-zn8r2 [838.55349ms]
Feb 11 01:22:36.944: INFO: Created: latency-svc-mrwhw
Feb 11 01:22:36.960: INFO: Got endpoints: latency-svc-mrwhw [899.862429ms]
Feb 11 01:22:37.032: INFO: Created: latency-svc-q77q5
Feb 11 01:22:37.092: INFO: Got endpoints: latency-svc-q77q5 [978.870118ms]
Feb 11 01:22:37.104: INFO: Created: latency-svc-7ptnm
Feb 11 01:22:37.114: INFO: Got endpoints: latency-svc-7ptnm [901.926615ms]
Feb 11 01:22:37.173: INFO: Created: latency-svc-rnr5f
Feb 11 01:22:37.178: INFO: Got endpoints: latency-svc-rnr5f [954.580974ms]
Feb 11 01:22:37.222: INFO: Created: latency-svc-t8swf
Feb 11 01:22:37.230: INFO: Got endpoints: latency-svc-t8swf [973.154655ms]
Feb 11 01:22:37.255: INFO: Created: latency-svc-wh2cn
Feb 11 01:22:37.261: INFO: Got endpoints: latency-svc-wh2cn [927.553417ms]
Feb 11 01:22:37.277: INFO: Created: latency-svc-krm6k
Feb 11 01:22:37.286: INFO: Got endpoints: latency-svc-krm6k [901.197036ms]
Feb 11 01:22:37.303: INFO: Created: latency-svc-w98sk
Feb 11 01:22:37.304: INFO: Got endpoints: latency-svc-w98sk [795.908839ms]
Feb 11 01:22:37.386: INFO: Created: latency-svc-zx49m
Feb 11 01:22:37.386: INFO: Got endpoints: latency-svc-zx49m [866.652255ms]
Feb 11 01:22:37.430: INFO: Created: latency-svc-6z9gc
Feb 11 01:22:37.467: INFO: Got endpoints: latency-svc-6z9gc [911.515133ms]
Feb 11 01:22:37.541: INFO: Created: latency-svc-6nmq2
Feb 11 01:22:37.547: INFO: Got endpoints: latency-svc-6nmq2 [896.830187ms]
Feb 11 01:22:37.599: INFO: Created: latency-svc-26mfq
Feb 11 01:22:37.602: INFO: Got endpoints: latency-svc-26mfq [921.426518ms]
Feb 11 01:22:37.643: INFO: Created: latency-svc-jl6jb
Feb 11 01:22:37.686: INFO: Got endpoints: latency-svc-jl6jb [891.152342ms]
Feb 11 01:22:37.713: INFO: Created: latency-svc-44dtv
Feb 11 01:22:37.729: INFO: Got endpoints: latency-svc-44dtv [912.553261ms]
Feb 11 01:22:37.755: INFO: Created: latency-svc-zlc2f
Feb 11 01:22:37.848: INFO: Got endpoints: latency-svc-zlc2f [979.234531ms]
Feb 11 01:22:37.850: INFO: Created: latency-svc-8l8r8
Feb 11 01:22:37.854: INFO: Got endpoints: latency-svc-8l8r8 [894.532741ms]
Feb 11 01:22:37.905: INFO: Created: latency-svc-7d6gm
Feb 11 01:22:37.919: INFO: Got endpoints: latency-svc-7d6gm [826.480811ms]
Feb 11 01:22:37.932: INFO: Created: latency-svc-b85m4
Feb 11 01:22:37.936: INFO: Got endpoints: latency-svc-b85m4 [821.935706ms]
Feb 11 01:22:38.025: INFO: Created: latency-svc-jvkhm
Feb 11 01:22:38.038: INFO: Got endpoints: latency-svc-jvkhm [859.257991ms]
Feb 11 01:22:38.051: INFO: Created: latency-svc-9fx88
Feb 11 01:22:38.053: INFO: Got endpoints: latency-svc-9fx88 [823.757919ms]
Feb 11 01:22:38.076: INFO: Created: latency-svc-bfplk
Feb 11 01:22:38.078: INFO: Got endpoints: latency-svc-bfplk [817.30819ms]
Feb 11 01:22:38.161: INFO: Created: latency-svc-kcssj
Feb 11 01:22:38.170: INFO: Got endpoints: latency-svc-kcssj [884.120499ms]
Feb 11 01:22:38.205: INFO: Created: latency-svc-5xtgc
Feb 11 01:22:38.213: INFO: Got endpoints: latency-svc-5xtgc [908.983499ms]
Feb 11 01:22:38.235: INFO: Created: latency-svc-xlg8b
Feb 11 01:22:38.249: INFO: Got endpoints: latency-svc-xlg8b [862.132997ms]
Feb 11 01:22:38.327: INFO: Created: latency-svc-rhhhc
Feb 11 01:22:38.328: INFO: Got endpoints: latency-svc-rhhhc [860.13831ms]
Feb 11 01:22:38.369: INFO: Created: latency-svc-vjgf7
Feb 11 01:22:38.371: INFO: Got endpoints: latency-svc-vjgf7 [823.280138ms]
Feb 11 01:22:38.397: INFO: Created: latency-svc-xgkc7
Feb 11 01:22:38.408: INFO: Got endpoints: latency-svc-xgkc7 [805.916358ms]
Feb 11 01:22:38.493: INFO: Created: latency-svc-p68wj
Feb 11 01:22:38.519: INFO: Got endpoints: latency-svc-p68wj [832.869093ms]
Feb 11 01:22:38.544: INFO: Created: latency-svc-sm5fj
Feb 11 01:22:38.551: INFO: Got endpoints: latency-svc-sm5fj [822.559126ms]
Feb 11 01:22:38.585: INFO: Created: latency-svc-jc8nb
Feb 11 01:22:38.657: INFO: Got endpoints: latency-svc-jc8nb [808.492938ms]
Feb 11 01:22:38.663: INFO: Created: latency-svc-n9zlq
Feb 11 01:22:38.680: INFO: Got endpoints: latency-svc-n9zlq [825.77593ms]
Feb 11 01:22:38.745: INFO: Created: latency-svc-mp7sw
Feb 11 01:22:38.750: INFO: Got endpoints: latency-svc-mp7sw [830.953727ms]
Feb 11 01:22:38.806: INFO: Created: latency-svc-xz5vp
Feb 11 01:22:38.811: INFO: Got endpoints: latency-svc-xz5vp [874.924232ms]
Feb 11 01:22:38.835: INFO: Created: latency-svc-cm4kl
Feb 11 01:22:38.841: INFO: Got endpoints: latency-svc-cm4kl [802.835864ms]
Feb 11 01:22:38.868: INFO: Created: latency-svc-lbqqj
Feb 11 01:22:38.873: INFO: Got endpoints: latency-svc-lbqqj [819.239906ms]
Feb 11 01:22:38.996: INFO: Created: latency-svc-rshrl
Feb 11 01:22:39.044: INFO: Got endpoints: latency-svc-rshrl [965.616923ms]
Feb 11 01:22:39.057: INFO: Created: latency-svc-gqffh
Feb 11 01:22:39.073: INFO: Got endpoints: latency-svc-gqffh [902.701799ms]
Feb 11 01:22:39.078: INFO: Created: latency-svc-cvm99
Feb 11 01:22:39.085: INFO: Got endpoints: latency-svc-cvm99 [871.86002ms]
Feb 11 01:22:39.149: INFO: Created: latency-svc-dxzzs
Feb 11 01:22:39.154: INFO: Got endpoints: latency-svc-dxzzs [905.513887ms]
Feb 11 01:22:39.173: INFO: Created: latency-svc-zg4dc
Feb 11 01:22:39.174: INFO: Got endpoints: latency-svc-zg4dc [846.027455ms]
Feb 11 01:22:39.188: INFO: Created: latency-svc-vhrqj
Feb 11 01:22:39.205: INFO: Got endpoints: latency-svc-vhrqj [834.693468ms]
Feb 11 01:22:39.208: INFO: Created: latency-svc-xhs2n
Feb 11 01:22:39.214: INFO: Got endpoints: latency-svc-xhs2n [805.518899ms]
Feb 11 01:22:39.240: INFO: Created: latency-svc-l8sgf
Feb 11 01:22:39.302: INFO: Got endpoints: latency-svc-l8sgf [782.578768ms]
Feb 11 01:22:39.325: INFO: Created: latency-svc-kvzfr
Feb 11 01:22:39.327: INFO: Got endpoints: latency-svc-kvzfr [775.683322ms]
Feb 11 01:22:39.352: INFO: Created: latency-svc-j8ctq
Feb 11 01:22:39.359: INFO: Got endpoints: latency-svc-j8ctq [702.583735ms]
Feb 11 01:22:39.385: INFO: Created: latency-svc-t9xt2
Feb 11 01:22:39.393: INFO: Got endpoints: latency-svc-t9xt2 [712.444325ms]
Feb 11 01:22:39.465: INFO: Created: latency-svc-7hkjq
Feb 11 01:22:39.509: INFO: Got endpoints: latency-svc-7hkjq [758.073216ms]
Feb 11 01:22:39.518: INFO: Created: latency-svc-vc24h
Feb 11 01:22:39.531: INFO: Got endpoints: latency-svc-vc24h [719.506577ms]
Feb 11 01:22:39.567: INFO: Created: latency-svc-jqqlb
Feb 11 01:22:39.614: INFO: Got endpoints: latency-svc-jqqlb [772.857062ms]
Feb 11 01:22:39.639: INFO: Created: latency-svc-jbfl8
Feb 11 01:22:39.645: INFO: Got endpoints: latency-svc-jbfl8 [772.204673ms]
Feb 11 01:22:39.674: INFO: Created: latency-svc-l9ht2
Feb 11 01:22:39.678: INFO: Got endpoints: latency-svc-l9ht2 [634.388246ms]
Feb 11 01:22:39.679: INFO: Latencies: [94.710042ms 95.147945ms 138.633507ms 204.510086ms 315.235419ms 346.101806ms 372.454213ms 387.345734ms 515.361473ms 606.980173ms 634.388246ms 668.665135ms 687.929059ms 689.234415ms 689.735722ms 690.781471ms 691.499017ms 702.583735ms 703.652912ms 712.444325ms 712.914776ms 718.596272ms 719.506577ms 727.314254ms 728.99738ms 735.542315ms 743.745303ms 748.755276ms 753.245416ms 754.248596ms 756.03218ms 758.073216ms 766.313857ms 770.495531ms 772.204673ms 772.857062ms 775.683322ms 782.578768ms 785.401594ms 788.091378ms 788.746078ms 795.908839ms 800.960915ms 802.835864ms 805.518899ms 805.916358ms 808.492938ms 813.296307ms 815.480028ms 817.30819ms 819.239906ms 820.514546ms 821.935706ms 822.038888ms 822.281594ms 822.559126ms 823.280138ms 823.757919ms 825.77593ms 826.480811ms 830.953727ms 830.968585ms 832.869093ms 834.622775ms 834.693468ms 838.55349ms 840.117944ms 840.4326ms 846.027455ms 847.631496ms 852.957149ms 858.295176ms 859.257991ms 860.13831ms 860.451836ms 862.132997ms 862.437073ms 866.652255ms 871.827098ms 871.86002ms 874.924232ms 883.597021ms 884.120499ms 885.755622ms 891.152342ms 893.082247ms 894.085107ms 894.532741ms 896.830187ms 899.862429ms 901.197036ms 901.926615ms 902.701799ms 905.513887ms 906.421578ms 908.983499ms 910.309435ms 911.515133ms 912.553261ms 916.827449ms 918.291693ms 921.426518ms 922.226688ms 926.401044ms 927.553417ms 927.592788ms 928.994837ms 930.187063ms 940.314106ms 953.378747ms 954.580974ms 959.462184ms 965.616923ms 965.64581ms 972.641733ms 973.154655ms 978.870118ms 979.234531ms 984.14679ms 996.699765ms 1.002001542s 1.009587063s 1.010806179s 1.022647246s 1.022688035s 1.023793515s 1.027629334s 1.027998818s 1.036018861s 1.036332087s 1.047216473s 1.047937285s 1.057975429s 1.060183209s 1.060229081s 1.065768468s 1.06802695s 1.072319501s 1.08013697s 1.085988832s 1.089816072s 1.091525706s 1.098147254s 1.101474878s 1.114588754s 1.117118522s 1.121459705s 1.133627267s 1.13999776s 1.147936055s 1.150535368s 1.159987919s 1.169676726s 1.20455119s 1.206607618s 1.207849719s 1.211239012s 1.217639524s 1.217795533s 1.221308773s 1.224920453s 1.234173535s 1.250170607s 1.252464845s 1.259288937s 1.265684394s 1.285760633s 1.314313981s 1.338751872s 1.341629472s 1.342772663s 1.362023263s 1.367956912s 1.368880857s 1.376836554s 1.381491052s 1.383095129s 1.386248301s 1.387726584s 1.388992915s 1.389984384s 1.392223633s 1.392355603s 1.39357529s 1.394117544s 1.39706394s 1.409004346s 1.412838738s 1.414414067s 1.415072301s 1.416745881s 1.432174257s 1.442025088s 1.451085937s 1.454194028s 1.465598282s 1.492248941s 1.520538422s 1.526473773s 1.579844243s]
Feb 11 01:22:39.679: INFO: 50 %ile: 918.291693ms
Feb 11 01:22:39.679: INFO: 90 %ile: 1.389984384s
Feb 11 01:22:39.679: INFO: 99 %ile: 1.526473773s
Feb 11 01:22:39.679: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:22:39.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5231" for this suite.

• [SLOW TEST:24.773 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":280,"completed":264,"skipped":4381,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:22:39.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 11 01:22:39.847: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 11 01:22:39.857: INFO: Number of nodes with available pods: 0
Feb 11 01:22:39.857: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 11 01:22:39.913: INFO: Number of nodes with available pods: 0
Feb 11 01:22:39.914: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:40.925: INFO: Number of nodes with available pods: 0
Feb 11 01:22:40.926: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:41.921: INFO: Number of nodes with available pods: 0
Feb 11 01:22:41.921: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:42.927: INFO: Number of nodes with available pods: 0
Feb 11 01:22:42.927: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:43.921: INFO: Number of nodes with available pods: 0
Feb 11 01:22:43.921: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:44.921: INFO: Number of nodes with available pods: 0
Feb 11 01:22:44.921: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:45.927: INFO: Number of nodes with available pods: 0
Feb 11 01:22:45.927: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:47.005: INFO: Number of nodes with available pods: 0
Feb 11 01:22:47.005: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:47.921: INFO: Number of nodes with available pods: 0
Feb 11 01:22:47.921: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:48.947: INFO: Number of nodes with available pods: 0
Feb 11 01:22:48.947: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:49.919: INFO: Number of nodes with available pods: 0
Feb 11 01:22:49.919: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:50.923: INFO: Number of nodes with available pods: 1
Feb 11 01:22:50.923: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 11 01:22:51.112: INFO: Number of nodes with available pods: 1
Feb 11 01:22:51.112: INFO: Number of running nodes: 0, number of available pods: 1
Feb 11 01:22:52.129: INFO: Number of nodes with available pods: 0
Feb 11 01:22:52.129: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 11 01:22:52.159: INFO: Number of nodes with available pods: 0
Feb 11 01:22:52.159: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:53.200: INFO: Number of nodes with available pods: 0
Feb 11 01:22:53.200: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:54.349: INFO: Number of nodes with available pods: 0
Feb 11 01:22:54.349: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:55.163: INFO: Number of nodes with available pods: 0
Feb 11 01:22:55.163: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:56.190: INFO: Number of nodes with available pods: 0
Feb 11 01:22:56.190: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:57.255: INFO: Number of nodes with available pods: 0
Feb 11 01:22:57.255: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:58.449: INFO: Number of nodes with available pods: 0
Feb 11 01:22:58.449: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:22:59.179: INFO: Number of nodes with available pods: 0
Feb 11 01:22:59.179: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:23:00.198: INFO: Number of nodes with available pods: 0
Feb 11 01:23:00.198: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:23:01.237: INFO: Number of nodes with available pods: 0
Feb 11 01:23:01.237: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:23:02.208: INFO: Number of nodes with available pods: 0
Feb 11 01:23:02.208: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:23:03.184: INFO: Number of nodes with available pods: 0
Feb 11 01:23:03.184: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:23:04.180: INFO: Number of nodes with available pods: 0
Feb 11 01:23:04.180: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:23:05.242: INFO: Number of nodes with available pods: 0
Feb 11 01:23:05.242: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:23:06.280: INFO: Number of nodes with available pods: 0
Feb 11 01:23:06.280: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:23:07.223: INFO: Number of nodes with available pods: 0
Feb 11 01:23:07.223: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:23:08.198: INFO: Number of nodes with available pods: 0
Feb 11 01:23:08.198: INFO: Node jerma-node is running more than one daemon pod
Feb 11 01:23:09.172: INFO: Number of nodes with available pods: 1
Feb 11 01:23:09.172: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6599, will wait for the garbage collector to delete the pods
Feb 11 01:23:09.266: INFO: Deleting DaemonSet.extensions daemon-set took: 12.532244ms
Feb 11 01:23:09.666: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.574296ms
Feb 11 01:23:22.371: INFO: Number of nodes with available pods: 0
Feb 11 01:23:22.371: INFO: Number of running nodes: 0, number of available pods: 0
Feb 11 01:23:22.374: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6599/daemonsets","resourceVersion":"7656091"},"items":null}

Feb 11 01:23:22.377: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6599/pods","resourceVersion":"7656091"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:23:22.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6599" for this suite.

• [SLOW TEST:42.753 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":265,"skipped":4397,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:23:22.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-2d3dee61-4b9b-495d-9977-c1c217bb1725
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:23:22.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5396" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":266,"skipped":4407,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:23:22.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9124.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9124.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 11 01:23:35.231: INFO: DNS probes using dns-9124/dns-test-331c7f5f-8ff6-4c29-b971-9d454fe645dd succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:23:35.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9124" for this suite.

• [SLOW TEST:12.499 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":280,"completed":267,"skipped":4407,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:23:35.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 11 01:23:35.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4163'
Feb 11 01:23:35.564: INFO: stderr: ""
Feb 11 01:23:35.565: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868
Feb 11 01:23:35.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4163'
Feb 11 01:23:41.979: INFO: stderr: ""
Feb 11 01:23:41.979: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:23:41.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4163" for this suite.

• [SLOW TEST:6.675 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1859
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":280,"completed":268,"skipped":4415,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:23:41.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 11 01:23:42.102: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:23:54.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1371" for this suite.

• [SLOW TEST:12.412 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":269,"skipped":4418,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:23:54.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:23:54.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4146" for this suite.
STEP: Destroying namespace "nspatchtest-1cc2b1ef-4f9f-4760-889f-46233bb34da7-1668" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":270,"skipped":4424,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:23:54.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 11 01:23:54.960: INFO: Waiting up to 5m0s for pod "pod-2fead655-ba4f-4878-9cc2-5b2113986d67" in namespace "emptydir-2153" to be "success or failure"
Feb 11 01:23:54.986: INFO: Pod "pod-2fead655-ba4f-4878-9cc2-5b2113986d67": Phase="Pending", Reason="", readiness=false. Elapsed: 26.249239ms
Feb 11 01:23:56.994: INFO: Pod "pod-2fead655-ba4f-4878-9cc2-5b2113986d67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034141834s
Feb 11 01:23:59.000: INFO: Pod "pod-2fead655-ba4f-4878-9cc2-5b2113986d67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040290113s
Feb 11 01:24:01.009: INFO: Pod "pod-2fead655-ba4f-4878-9cc2-5b2113986d67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048524918s
Feb 11 01:24:03.013: INFO: Pod "pod-2fead655-ba4f-4878-9cc2-5b2113986d67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052909724s
STEP: Saw pod success
Feb 11 01:24:03.013: INFO: Pod "pod-2fead655-ba4f-4878-9cc2-5b2113986d67" satisfied condition "success or failure"
Feb 11 01:24:03.016: INFO: Trying to get logs from node jerma-node pod pod-2fead655-ba4f-4878-9cc2-5b2113986d67 container test-container: 
STEP: delete the pod
Feb 11 01:24:03.139: INFO: Waiting for pod pod-2fead655-ba4f-4878-9cc2-5b2113986d67 to disappear
Feb 11 01:24:03.162: INFO: Pod pod-2fead655-ba4f-4878-9cc2-5b2113986d67 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:24:03.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2153" for this suite.

• [SLOW TEST:8.574 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":271,"skipped":4425,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:24:03.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 11 01:24:10.848: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:24:10.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6826" for this suite.

• [SLOW TEST:7.627 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":272,"skipped":4435,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:24:10.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 11 01:24:11.218: INFO: Waiting up to 5m0s for pod "downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0" in namespace "downward-api-3871" to be "success or failure"
Feb 11 01:24:11.227: INFO: Pod "downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.966147ms
Feb 11 01:24:13.233: INFO: Pod "downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015398671s
Feb 11 01:24:15.239: INFO: Pod "downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021642239s
Feb 11 01:24:17.248: INFO: Pod "downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029946594s
Feb 11 01:24:19.253: INFO: Pod "downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035083469s
Feb 11 01:24:21.260: INFO: Pod "downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.042366306s
STEP: Saw pod success
Feb 11 01:24:21.260: INFO: Pod "downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0" satisfied condition "success or failure"
Feb 11 01:24:21.264: INFO: Trying to get logs from node jerma-node pod downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0 container dapi-container: 
STEP: delete the pod
Feb 11 01:24:21.506: INFO: Waiting for pod downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0 to disappear
Feb 11 01:24:21.520: INFO: Pod downward-api-b643dbb6-9cad-4f2b-8dad-69fba66463e0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:24:21.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3871" for this suite.

• [SLOW TEST:10.583 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4459,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:24:21.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 11 01:24:30.348: INFO: Successfully updated pod "labelsupdateda84e7db-dc48-4a08-9b3d-764435dd6e90"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:24:32.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6564" for this suite.

• [SLOW TEST:10.890 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":274,"skipped":4465,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:24:32.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:25:19.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-52" for this suite.

• [SLOW TEST:47.135 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":275,"skipped":4468,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:25:19.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override command
Feb 11 01:25:19.710: INFO: Waiting up to 5m0s for pod "client-containers-b17ed336-44f4-4d36-9993-934e68091673" in namespace "containers-7891" to be "success or failure"
Feb 11 01:25:19.739: INFO: Pod "client-containers-b17ed336-44f4-4d36-9993-934e68091673": Phase="Pending", Reason="", readiness=false. Elapsed: 27.981276ms
Feb 11 01:25:21.753: INFO: Pod "client-containers-b17ed336-44f4-4d36-9993-934e68091673": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042117398s
Feb 11 01:25:23.761: INFO: Pod "client-containers-b17ed336-44f4-4d36-9993-934e68091673": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050585506s
Feb 11 01:25:25.766: INFO: Pod "client-containers-b17ed336-44f4-4d36-9993-934e68091673": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055813403s
Feb 11 01:25:27.780: INFO: Pod "client-containers-b17ed336-44f4-4d36-9993-934e68091673": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06907078s
STEP: Saw pod success
Feb 11 01:25:27.780: INFO: Pod "client-containers-b17ed336-44f4-4d36-9993-934e68091673" satisfied condition "success or failure"
Feb 11 01:25:27.786: INFO: Trying to get logs from node jerma-node pod client-containers-b17ed336-44f4-4d36-9993-934e68091673 container test-container: 
STEP: delete the pod
Feb 11 01:25:27.897: INFO: Waiting for pod client-containers-b17ed336-44f4-4d36-9993-934e68091673 to disappear
Feb 11 01:25:27.922: INFO: Pod client-containers-b17ed336-44f4-4d36-9993-934e68091673 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:25:27.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7891" for this suite.

• [SLOW TEST:8.378 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":276,"skipped":4510,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:25:27.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:25:36.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7929" for this suite.

• [SLOW TEST:8.509 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":277,"skipped":4530,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:25:36.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-4c47812b-e71f-434d-828a-0570ea74a402
STEP: Creating a pod to test consume secrets
Feb 11 01:25:36.614: INFO: Waiting up to 5m0s for pod "pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c" in namespace "secrets-7726" to be "success or failure"
Feb 11 01:25:36.823: INFO: Pod "pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c": Phase="Pending", Reason="", readiness=false. Elapsed: 209.222028ms
Feb 11 01:25:38.831: INFO: Pod "pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216916941s
Feb 11 01:25:40.838: INFO: Pod "pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224329249s
Feb 11 01:25:42.853: INFO: Pod "pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.238492828s
Feb 11 01:25:44.866: INFO: Pod "pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251606666s
Feb 11 01:25:46.875: INFO: Pod "pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.260796316s
STEP: Saw pod success
Feb 11 01:25:46.875: INFO: Pod "pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c" satisfied condition "success or failure"
Feb 11 01:25:46.882: INFO: Trying to get logs from node jerma-node pod pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c container secret-env-test: 
STEP: delete the pod
Feb 11 01:25:47.366: INFO: Waiting for pod pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c to disappear
Feb 11 01:25:47.380: INFO: Pod pod-secrets-982e0efd-4a1d-481c-b271-0d210cb2710c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:25:47.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7726" for this suite.

• [SLOW TEST:10.942 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":278,"skipped":4559,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 11 01:25:47.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 11 01:25:55.588: INFO: &Pod{ObjectMeta:{send-events-cc4150ec-f1de-45ea-8173-b121bab16bb5  events-2673 /api/v1/namespaces/events-2673/pods/send-events-cc4150ec-f1de-45ea-8173-b121bab16bb5 c28cc783-9b3c-47a8-9c9b-ff4274773d3c 7656835 0 2020-02-11 01:25:47 +0000 UTC   map[name:foo time:533186557] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h2khj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h2khj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h2khj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:25:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:25:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:25:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-11 01:25:47 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-11 01:25:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-11 01:25:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://2858ab3b9ccd3744cd2f6d17caf629c3d8eb976eca500ab7dc40b9dcb4516b57,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Feb 11 01:25:57.598: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 11 01:25:59.605: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 11 01:25:59.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2673" for this suite.

• [SLOW TEST:12.298 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":280,"completed":279,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
Feb 11 01:25:59.692: INFO: Running AfterSuite actions on all nodes
Feb 11 01:25:59.692: INFO: Running AfterSuite actions on node 1
Feb 11 01:25:59.692: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":279,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339

Ran 280 of 4845 Specs in 6412.446 seconds
FAIL! -- 279 Passed | 1 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (6412.56s)
FAIL