I0125 09:41:55.523237 9 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0125 09:41:55.523980 9 e2e.go:109] Starting e2e run "322dc050-c61b-43b0-8ec7-4963302458f4" on Ginkgo node 1
{"msg":"Test Suite starting","total":279,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579945313 - Will randomize all specs
Will run 279 of 4845 specs

Jan 25 09:41:55.603: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 09:41:55.606: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 25 09:41:55.642: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 25 09:41:55.699: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 25 09:41:55.699: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 25 09:41:55.699: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 25 09:41:55.713: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 25 09:41:55.713: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 25 09:41:55.713: INFO: e2e test version: v1.18.0-alpha.2.147+98f63eee1bf251
Jan 25 09:41:55.715: INFO: kube-apiserver version: v1.17.0
Jan 25 09:41:55.715: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 09:41:55.723: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:41:55.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jan 25 09:41:55.879: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Jan 25 09:41:55.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4037'
Jan 25 09:41:58.186: INFO: stderr: ""
Jan 25 09:41:58.186: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 25 09:41:59.194: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 09:41:59.194: INFO: Found 0 / 1
Jan 25 09:42:00.198: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 09:42:00.198: INFO: Found 0 / 1
Jan 25 09:42:01.194: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 09:42:01.194: INFO: Found 0 / 1
Jan 25 09:42:02.196: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 09:42:02.196: INFO: Found 0 / 1
Jan 25 09:42:03.197: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 09:42:03.197: INFO: Found 0 / 1
Jan 25 09:42:04.191: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 09:42:04.191: INFO: Found 0 / 1
Jan 25 09:42:05.214: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 09:42:05.214: INFO: Found 1 / 1
Jan 25 09:42:05.214: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 25 09:42:05.218: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 09:42:05.218: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 25 09:42:05.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-6kx6l --namespace=kubectl-4037 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 25 09:42:05.354: INFO: stderr: ""
Jan 25 09:42:05.354: INFO: stdout: "pod/agnhost-master-6kx6l patched\n"
STEP: checking annotations
Jan 25 09:42:05.376: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 09:42:05.376: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 09:42:05.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4037" for this suite.
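For reference, the annotation patch this spec issues can be replayed by hand; the pod and namespace names below are the ones generated for this particular run, and the jsonpath check is an illustrative addition rather than part of the test:

  kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-6kx6l \
    --namespace=kubectl-4037 \
    -p '{"metadata":{"annotations":{"x":"y"}}}'
  # Confirm the annotation was merged into the pod's metadata (prints "y"):
  kubectl --kubeconfig=/root/.kube/config get pod agnhost-master-6kx6l \
    --namespace=kubectl-4037 -o jsonpath='{.metadata.annotations.x}'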
• [SLOW TEST:9.668 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":279,"completed":1,"skipped":7,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:42:05.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-ef60fbe7-176b-41c8-aa24-71868d0de44c
STEP: Creating a pod to test consume secrets
Jan 25 09:42:05.505: INFO: Waiting up to 5m0s for pod "pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416" in namespace "secrets-6475" to be "success or failure"
Jan 25 09:42:05.538: INFO: Pod "pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416": Phase="Pending", Reason="", readiness=false. Elapsed: 32.379453ms
Jan 25 09:42:07.544: INFO: Pod "pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038915197s
Jan 25 09:42:09.552: INFO: Pod "pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046221098s
Jan 25 09:42:11.562: INFO: Pod "pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056779206s
Jan 25 09:42:13.574: INFO: Pod "pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068486658s
Jan 25 09:42:15.582: INFO: Pod "pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077057585s
STEP: Saw pod success
Jan 25 09:42:15.583: INFO: Pod "pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416" satisfied condition "success or failure"
Jan 25 09:42:15.587: INFO: Trying to get logs from node jerma-node pod pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416 container secret-volume-test:
STEP: delete the pod
Jan 25 09:42:15.792: INFO: Waiting for pod pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416 to disappear
Jan 25 09:42:15.799: INFO: Pod pod-secrets-ea3b147d-4b30-49f5-9097-9c0bd033c416 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 09:42:15.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6475" for this suite.
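The "mappings" in this spec refer to the items field of a secret volume, which remaps a secret key to a custom file path inside the mount. A minimal sketch of the shape under test; the names, key, and image here are illustrative (the real test generates UUID-suffixed names and also verifies file mode):

  kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test-map-demo
  stringData:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.28
      command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map-demo
        items:                 # the mapping under test: key data-1 -> file new-path-data-1
        - key: data-1
          path: new-path-data-1
  EOF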
• [SLOW TEST:10.427 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":279,"completed":2,"skipped":23,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:42:15.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 09:42:16.502: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 09:42:18.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 09:42:20.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 09:42:22.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542136, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 09:42:25.641: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 09:42:25.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3911" for this suite.
STEP: Destroying namespace "webhook-3911-markers" for this suite.
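The update and patch steps above toggle the CREATE operation in the webhook's rules, which is why the non-compliant configMap creations alternately succeed and get rejected. A hedged sketch of such a rule patch; the configuration name and rule index are hypothetical, since the test creates the object programmatically:

  kubectl --kubeconfig=/root/.kube/config patch validatingwebhookconfiguration \
    e2e-test-validating-webhook-config --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]'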
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:10.183 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":279,"completed":3,"skipped":23,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:42:26.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 09:42:26.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 25 09:42:26.250: INFO: stderr: ""
Jan 25 09:42:26.251: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.147+98f63eee1bf251\", GitCommit:\"98f63eee1bf25123785d26ff565968270f68afd1\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T09:15:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 09:42:26.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2546" for this suite.
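This spec simply asserts that both the client and server halves of the version report are present; it amounts to checking the output of:

  kubectl --kubeconfig=/root/.kube/config version
  # Client Version: version.Info{Major:"1", Minor:"18+", ...}
  # Server Version: version.Info{Major:"1", Minor:"17", ...}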
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":279,"completed":4,"skipped":35,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:42:26.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-151.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-151.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-151.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-151.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-151.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-151.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 09:42:40.584: INFO: DNS probes using dns-151/dns-test-0281adb8-9a9a-4db2-b890-a6aa47d94dbe succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:42:40.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-151" for this suite. 
• [SLOW TEST:14.400 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":279,"completed":5,"skipped":48,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:42:40.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 09:42:40.753: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 25 09:42:40.917: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 25 09:42:45.937: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 09:42:51.951: INFO: Creating deployment "test-rolling-update-deployment"
Jan 25 09:42:51.964: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 25 09:42:51.987: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 25 09:42:54.006: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 25 09:42:54.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542171, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 09:42:56.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542171, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 09:42:58.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542171, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 09:43:00.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542172, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542179, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542171, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 09:43:02.033: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 25 09:43:02.128: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9514 /apis/apps/v1/namespaces/deployment-9514/deployments/test-rolling-update-deployment dae5235a-5eb9-4062-bfe7-2e4c70dbb1c5 4211891 1 2020-01-25 09:42:51 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031627d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-25 09:42:52 +0000 UTC,LastTransitionTime:2020-01-25 09:42:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-25 09:43:00 +0000 UTC,LastTransitionTime:2020-01-25 09:42:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Jan 25 09:43:02.137: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-9514 /apis/apps/v1/namespaces/deployment-9514/replicasets/test-rolling-update-deployment-67cf4f6444 9fa8ac34-8bd9-4c85-ac56-7bab82b7f6a0 4211878 1 2020-01-25 09:42:51 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment dae5235a-5eb9-4062-bfe7-2e4c70dbb1c5 0xc003162c67 0xc003162c68}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003162cd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 25 09:43:02.138: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 25 09:43:02.138: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9514 /apis/apps/v1/namespaces/deployment-9514/replicasets/test-rolling-update-controller b3a9edf9-d0e0-4b8e-9765-e1d7ae0556c6 4211890 2 2020-01-25 09:42:40 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment dae5235a-5eb9-4062-bfe7-2e4c70dbb1c5 0xc003162b7f 0xc003162b90}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003162bf8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 09:43:02.143: INFO: Pod "test-rolling-update-deployment-67cf4f6444-55ssw" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-55ssw test-rolling-update-deployment-67cf4f6444- deployment-9514 /api/v1/namespaces/deployment-9514/pods/test-rolling-update-deployment-67cf4f6444-55ssw d9b12c1f-d13a-4980-84d2-adb00b353529 4211877 0 2020-01-25 09:42:51 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 9fa8ac34-8bd9-4c85-ac56-7bab82b7f6a0 0xc0030c3327 0xc0030c3328}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dk9gf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dk9gf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dk9gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 09:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 09:42:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 09:42:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 09:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-25 09:42:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 09:42:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://759496f06693b42d60aa5a3bd11e3cee2d9d2cf71d9f7f1882393fedf785e7bc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 09:43:02.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9514" for this suite.
• [SLOW TEST:21.492 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":279,"completed":6,"skipped":62,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:43:02.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 25 09:43:03.165: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 25 09:43:05.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 09:43:07.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 09:43:09.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 09:43:11.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715542183, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 09:43:14.221: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 09:43:14.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 09:43:15.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-919" for this suite.
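Conversion between CR versions is driven by a conversion stanza on the CRD that points at the service deployed above. A hedged sketch of that stanza under the apiextensions.k8s.io/v1 API; the group, kind, path, and caBundle placeholder are illustrative, since the test wires these up programmatically:

  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: tests.stable.example.com
  spec:
    group: stable.example.com
    names: {kind: Test, listKind: TestList, plural: tests, singular: test}
    scope: Namespaced
    versions:
    - name: v1
      served: true
      storage: true
      schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
    - name: v2
      served: true
      storage: false
      schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
    conversion:
      strategy: Webhook
      webhook:
        conversionReviewVersions: ["v1", "v1beta1"]
        clientConfig:
          caBundle: Cg==             # placeholder; must be the CA that signed the webhook's serving cert
          service:
            namespace: crd-webhook-919
            name: e2e-test-crd-conversion-webhook
            path: /crdconvert        # illustrative path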
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:13.997 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":279,"completed":7,"skipped":72,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:43:16.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1870
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-1870
STEP: creating replication controller externalsvc in namespace services-1870
I0125 09:43:16.519741 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1870, replica count: 2
I0125 09:43:19.571678 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0125 09:43:22.572473 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0125 09:43:25.573042 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0125 09:43:28.573868 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Jan 25 09:43:28.618: INFO: Creating new exec pod
Jan 25 09:43:36.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1870 execpodplbqc -- /bin/sh -x -c nslookup clusterip-service'
Jan 25 09:43:37.055: INFO: stderr: "I0125 09:43:36.834671 101 log.go:172] (0xc0009e0d10) (0xc000c3c320) Create stream\nI0125 09:43:36.834894 101 log.go:172] (0xc0009e0d10) (0xc000c3c320) Stream added, broadcasting: 1\nI0125 09:43:36.841668 101 log.go:172] (0xc0009e0d10) Reply frame received for 1\nI0125 09:43:36.841704 101 log.go:172] (0xc0009e0d10) (0xc000a92320) Create stream\nI0125 09:43:36.841713 101 log.go:172] (0xc0009e0d10) (0xc000a92320) Stream added, broadcasting: 3\nI0125 09:43:36.845035 101 log.go:172] (0xc0009e0d10) Reply frame received for 3\nI0125 09:43:36.845121 101 log.go:172] (0xc0009e0d10) (0xc0009ca280) Create stream\nI0125 09:43:36.845145 101 log.go:172] (0xc0009e0d10) (0xc0009ca280) Stream added, broadcasting: 5\nI0125 09:43:36.846795 101 log.go:172] (0xc0009e0d10) Reply frame received for 5\nI0125 09:43:36.940438 101 log.go:172] (0xc0009e0d10) Data frame received for 5\nI0125 09:43:36.940643 101 log.go:172] (0xc0009ca280) (5) Data frame handling\nI0125 09:43:36.940681 101 log.go:172] (0xc0009ca280) (5) Data frame sent\n+ nslookup clusterip-service\nI0125 09:43:36.961040 101 log.go:172] (0xc0009e0d10) Data frame received for 3\nI0125 09:43:36.961167 101 log.go:172] (0xc000a92320) (3) Data frame handling\nI0125 09:43:36.961217 101 log.go:172] (0xc000a92320) (3) Data frame sent\nI0125 09:43:36.964940 101 log.go:172] (0xc0009e0d10) Data frame received for 3\nI0125 09:43:36.964966 101 log.go:172] (0xc000a92320) (3) Data frame handling\nI0125 09:43:36.964978 101 log.go:172] (0xc000a92320) (3) Data frame sent\nI0125 09:43:37.042922 101 log.go:172] (0xc0009e0d10) Data frame received for 1\nI0125 09:43:37.043015 101 log.go:172] (0xc0009e0d10) (0xc0009ca280) Stream removed, broadcasting: 5\nI0125 09:43:37.043039 101 log.go:172] (0xc000c3c320) (1) Data frame handling\nI0125 09:43:37.043053 101 log.go:172] (0xc000c3c320) (1) Data frame sent\nI0125 09:43:37.043076 101 log.go:172] (0xc0009e0d10) (0xc000a92320) Stream removed, broadcasting: 3\nI0125 09:43:37.043099 101 log.go:172] (0xc0009e0d10) (0xc000c3c320) Stream removed, broadcasting: 1\nI0125 09:43:37.043163 101 log.go:172] (0xc0009e0d10) Go away received\nI0125 09:43:37.043780 101 log.go:172] (0xc0009e0d10) (0xc000c3c320) Stream removed, broadcasting: 1\nI0125 09:43:37.043796 101 log.go:172] (0xc0009e0d10) (0xc000a92320) Stream removed, broadcasting: 3\nI0125 09:43:37.043804 101 log.go:172] (0xc0009e0d10) (0xc0009ca280) Stream removed, broadcasting: 5\n"
Jan 25 09:43:37.056: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1870.svc.cluster.local\tcanonical name = externalsvc.services-1870.svc.cluster.local.\nName:\texternalsvc.services-1870.svc.cluster.local\nAddress: 10.96.15.24\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-1870, will wait for the garbage collector to delete the pods
Jan 25 09:43:37.122: INFO: Deleting ReplicationController externalsvc took: 7.807763ms
Jan 25 09:43:37.423: INFO: Terminating ReplicationController externalsvc pods took: 300.90828ms
Jan 25 09:43:52.556: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 09:43:52.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1870" for this suite.
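The type flip itself is a spec update. A hedged sketch of an equivalent merge patch (the test performs the update through the client library; clusterIP must be cleared when leaving type=ClusterIP), followed by the same in-cluster lookup shown in the stdout above:

  kubectl --kubeconfig=/root/.kube/config patch service clusterip-service \
    --namespace=services-1870 --type=merge \
    -p '{"spec":{"type":"ExternalName","clusterIP":"","externalName":"externalsvc.services-1870.svc.cluster.local"}}'
  kubectl --kubeconfig=/root/.kube/config exec execpodplbqc \
    --namespace=services-1870 -- nslookup clusterip-service
  # expect a CNAME to externalsvc.services-1870.svc.cluster.local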
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:36.504 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":279,"completed":8,"skipped":90,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:43:52.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 25 09:43:52.952: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9963 /api/v1/namespaces/watch-9963/configmaps/e2e-watch-test-watch-closed cf227800-9f46-46c5-81e9-63d9b8f93ed5 4212161 0 2020-01-25 09:43:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 25 09:43:52.956: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9963 /api/v1/namespaces/watch-9963/configmaps/e2e-watch-test-watch-closed cf227800-9f46-46c5-81e9-63d9b8f93ed5 4212163 0 2020-01-25 09:43:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 25 09:43:53.128: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9963 /api/v1/namespaces/watch-9963/configmaps/e2e-watch-test-watch-closed cf227800-9f46-46c5-81e9-63d9b8f93ed5 4212165 0 2020-01-25 09:43:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 25 09:43:53.129: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9963 /api/v1/namespaces/watch-9963/configmaps/e2e-watch-test-watch-closed cf227800-9f46-46c5-81e9-63d9b8f93ed5 4212166 0 2020-01-25 09:43:52 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 09:43:53.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9963" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":279,"completed":9,"skipped":100,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:43:53.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 25 09:44:05.367: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3274 PodName:pod-sharedvolume-fea92b2f-b64e-4c06-8011-2be4f56869f3 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 09:44:05.367: INFO: >>> kubeConfig: /root/.kube/config
I0125 09:44:05.429271 9 log.go:172] (0xc001d40840) (0xc002b71900) Create stream
I0125 09:44:05.429462 9 log.go:172] (0xc001d40840) (0xc002b71900) Stream added, broadcasting: 1
I0125 09:44:05.437568 9 log.go:172] (0xc001d40840) Reply frame received for 1
I0125 09:44:05.437697 9 log.go:172] (0xc001d40840) (0xc002adb680) Create stream
I0125 09:44:05.437723 9 log.go:172] (0xc001d40840) (0xc002adb680) Stream added, broadcasting: 3
I0125 09:44:05.440232 9 log.go:172] (0xc001d40840) Reply frame received for 3
I0125 09:44:05.440321 9 log.go:172] (0xc001d40840) (0xc0023463c0) Create stream
I0125 09:44:05.440340 9 log.go:172] (0xc001d40840) (0xc0023463c0) Stream added, broadcasting: 5
I0125 09:44:05.443542 9 log.go:172] (0xc001d40840) Reply frame received for 5
I0125 09:44:05.539976 9 log.go:172] (0xc001d40840) Data frame received for 3
I0125 09:44:05.540064 9 log.go:172] (0xc002adb680) (3) Data frame handling
I0125 09:44:05.540120 9 log.go:172] (0xc002adb680) (3) Data frame sent
I0125 09:44:05.617959 9 log.go:172] (0xc001d40840) (0xc002adb680) Stream removed, broadcasting: 3
I0125 09:44:05.618078 9 log.go:172] (0xc001d40840) Data frame received for 1
I0125 09:44:05.618090 9 log.go:172] (0xc002b71900) (1) Data frame handling
I0125 09:44:05.618102 9 log.go:172] (0xc002b71900) (1) Data frame sent
I0125 09:44:05.618200 9 log.go:172] (0xc001d40840) (0xc002b71900) Stream removed, broadcasting: 1
I0125 09:44:05.618694 9 log.go:172] (0xc001d40840) (0xc0023463c0) Stream removed, broadcasting: 5
I0125 09:44:05.618725 9 log.go:172] (0xc001d40840) Go away received
I0125 09:44:05.619127 9 log.go:172] (0xc001d40840) (0xc002b71900) Stream removed, broadcasting: 1
I0125 09:44:05.619213 9 log.go:172] (0xc001d40840) (0xc002adb680) Stream removed, broadcasting: 3
I0125 09:44:05.619231 9 log.go:172] (0xc001d40840) (0xc0023463c0) Stream removed, broadcasting: 5
Jan 25 09:44:05.619: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 09:44:05.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3274" for this suite.
• [SLOW TEST:12.488 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":279,"completed":10,"skipped":112,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:44:05.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 09:44:05.732: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40346aa3-006a-425d-ac9d-ec0de56e8c8b" in namespace "downward-api-7553" to be "success or failure"
Jan 25 09:44:05.742: INFO: Pod "downwardapi-volume-40346aa3-006a-425d-ac9d-ec0de56e8c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.102624ms
Jan 25 09:44:07.750: INFO: Pod "downwardapi-volume-40346aa3-006a-425d-ac9d-ec0de56e8c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017580384s
Jan 25 09:44:09.759: INFO: Pod "downwardapi-volume-40346aa3-006a-425d-ac9d-ec0de56e8c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026243295s
Jan 25 09:44:11.765: INFO: Pod "downwardapi-volume-40346aa3-006a-425d-ac9d-ec0de56e8c8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032715271s
Jan 25 09:44:14.661: INFO: Pod "downwardapi-volume-40346aa3-006a-425d-ac9d-ec0de56e8c8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.928734479s
STEP: Saw pod success
Jan 25 09:44:14.662: INFO: Pod "downwardapi-volume-40346aa3-006a-425d-ac9d-ec0de56e8c8b" satisfied condition "success or failure"
Jan 25 09:44:14.676: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-40346aa3-006a-425d-ac9d-ec0de56e8c8b container client-container:
STEP: delete the pod
Jan 25 09:44:14.938: INFO: Waiting for pod downwardapi-volume-40346aa3-006a-425d-ac9d-ec0de56e8c8b to disappear
Jan 25 09:44:14.955: INFO: Pod downwardapi-volume-40346aa3-006a-425d-ac9d-ec0de56e8c8b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 09:44:14.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7553" for this suite.
• [SLOW TEST:9.326 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":279,"completed":11,"skipped":168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 09:44:14.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 09:44:15.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 25 09:44:18.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5853 create -f -'
Jan 25 09:44:20.758: INFO: stderr: ""
Jan 25 09:44:20.758: INFO: stdout: "e2e-test-crd-publish-openapi-6227-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 25 09:44:20.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5853 delete e2e-test-crd-publish-openapi-6227-crds test-foo'
Jan 25 09:44:20.882: INFO: stderr: ""
Jan 25 09:44:20.882: INFO: stdout: "e2e-test-crd-publish-openapi-6227-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 25 09:44:20.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5853 apply -f -'
Jan 25 09:44:21.187: INFO: stderr: ""
Jan 25 09:44:21.187: INFO: stdout: "e2e-test-crd-publish-openapi-6227-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 25 09:44:21.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5853 delete e2e-test-crd-publish-openapi-6227-crds test-foo'
Jan 25 09:44:21.283: INFO: stderr: ""
Jan 25 09:44:21.283: INFO: stdout: "e2e-test-crd-publish-openapi-6227-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 25 09:44:21.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5853 create -f -'
Jan 25 09:44:21.662: INFO: rc: 1
Jan 25 09:44:21.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5853 apply -f -'
Jan 25 09:44:21.997: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan 25 09:44:21.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5853 create -f -'
Jan 25 09:44:22.265: INFO: rc: 1
Jan 25 09:44:22.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5853 apply -f -'
Jan 25 09:44:22.741: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 25 09:44:22.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6227-crds'
Jan 25 09:44:23.101: INFO: stderr: ""
Jan 25 09:44:23.102: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6227-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 25 09:44:23.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6227-crds.metadata'
Jan 25 09:44:23.465: INFO: stderr: ""
Jan 25 09:44:23.465: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6227-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects.
More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. 
If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 25 09:44:23.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6227-crds.spec' Jan 25 09:44:23.918: INFO: stderr: "" Jan 25 09:44:23.918: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6227-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 25 09:44:23.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6227-crds.spec.bars' Jan 25 09:44:24.242: INFO: stderr: "" Jan 25 09:44:24.243: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6227-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 25 09:44:24.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6227-crds.spec.bars2' Jan 25 09:44:24.586: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:44:28.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5853" for this suite. 
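An illustrative sketch of the kind of CRD this test publishes, with a structural OpenAPI v3 validation schema; the bars/name/age/bazs fields mirror the kubectl explain output above, but the manifest itself (group, kind, names) is a hypothetical reconstruction, not the one the test generated:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com              # illustrative name
spec:
  group: example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              bars:                   # matches the fields reported by explain above
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name: {type: string}
                    age: {type: string}
                    bazs:
                      type: array
                      items: {type: string}
EOF
kubectl explain foos.spec.bars        # served from the published OpenAPI schema

Once such a schema is published, kubectl validates client-side, which is why the create/apply attempts above with unknown or missing required properties exit with rc: 1.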
• [SLOW TEST:13.257 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":279,"completed":12,"skipped":278,"failed":0} SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:44:28.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:44:28.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6177" for this suite. STEP: Destroying namespace "nspatchtest-a4bc0057-39b8-45ac-9745-a84daea1e68a-3821" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":279,"completed":13,"skipped":280,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:44:28.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:44:36.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4466" for this suite. 
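A rough interactive equivalent of what this Kubelet test asserts (pod name hypothetical): a container whose command always fails leaves a terminated state whose reason the kubelet reports.

kubectl run always-fails --image=busybox --restart=Never -- /bin/false
# once the container exits, its status carries a terminated reason:
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# typically prints "Error" for a non-zero exit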
• [SLOW TEST:8.218 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":279,"completed":14,"skipped":285,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:44:36.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override arguments Jan 25 09:44:36.949: INFO: Waiting up to 5m0s for pod "client-containers-f848f056-607f-4e4a-8be4-3e45766a6a9c" in namespace "containers-9247" to be "success or failure" Jan 25 09:44:36.969: INFO: Pod "client-containers-f848f056-607f-4e4a-8be4-3e45766a6a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.578471ms Jan 25 09:44:38.976: INFO: Pod "client-containers-f848f056-607f-4e4a-8be4-3e45766a6a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026835838s Jan 25 09:44:40.985: INFO: Pod "client-containers-f848f056-607f-4e4a-8be4-3e45766a6a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036087298s Jan 25 09:44:42.994: INFO: Pod "client-containers-f848f056-607f-4e4a-8be4-3e45766a6a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044816751s Jan 25 09:44:45.000: INFO: Pod "client-containers-f848f056-607f-4e4a-8be4-3e45766a6a9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050943832s STEP: Saw pod success Jan 25 09:44:45.000: INFO: Pod "client-containers-f848f056-607f-4e4a-8be4-3e45766a6a9c" satisfied condition "success or failure" Jan 25 09:44:45.003: INFO: Trying to get logs from node jerma-node pod client-containers-f848f056-607f-4e4a-8be4-3e45766a6a9c container test-container: STEP: delete the pod Jan 25 09:44:45.040: INFO: Waiting for pod client-containers-f848f056-607f-4e4a-8be4-3e45766a6a9c to disappear Jan 25 09:44:45.044: INFO: Pod client-containers-f848f056-607f-4e4a-8be4-3e45766a6a9c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:44:45.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9247" for this suite. 
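The mechanism exercised here is the container args field, which replaces the image's default CMD (command would replace the ENTRYPOINT). A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override                    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]               # replaces the image ENTRYPOINT
    args: ["overridden", "arguments"]    # replaces the image CMD (the "docker cmd" in the test title)
EOF
kubectl logs args-override               # prints: overridden arguments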
• [SLOW TEST:8.348 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":279,"completed":15,"skipped":293,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:44:45.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8017.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8017.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 09:44:57.278: INFO: DNS probes using dns-8017/dns-test-bb21df2c-8d17-4057-9e89-34666a4c202b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:44:57.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8017" for this suite. 
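The dig loops above run inside the probe pod; the same lookup can be tried one-off from any pod (pod name hypothetical):

kubectl run dns-probe --rm -it --image=busybox --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
# a healthy cluster DNS answers with the ClusterIP of the kubernetes service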
• [SLOW TEST:12.356 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":279,"completed":16,"skipped":331,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:44:57.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jan 25 09:44:57.767: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:45:14.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5893" for this suite. 
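The watch-based checks above correspond roughly to this client-side flow (pod name hypothetical):

kubectl get pods --watch &               # stream pod changes as they happen
kubectl run watched-pod --image=busybox --restart=Never -- sleep 3600
kubectl delete pod watched-pod --grace-period=30
# the watch first shows the termination notice (deletionTimestamp set),
# then the final deletion once the kubelet has stopped the container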
• [SLOW TEST:17.117 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":279,"completed":17,"skipped":352,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:45:14.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token Jan 25 09:45:15.244: INFO: created pod pod-service-account-defaultsa Jan 25 09:45:15.244: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 25 09:45:15.272: INFO: created pod pod-service-account-mountsa Jan 25 09:45:15.273: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 25 09:45:15.294: INFO: created pod pod-service-account-nomountsa Jan 25 09:45:15.294: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 25 09:45:15.444: INFO: created pod pod-service-account-defaultsa-mountspec Jan 25 09:45:15.444: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 25 09:45:15.520: INFO: created pod pod-service-account-mountsa-mountspec Jan 25 09:45:15.520: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jan 25 09:45:15.572: INFO: created pod pod-service-account-nomountsa-mountspec Jan 25 09:45:15.572: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 25 09:45:15.584: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 25 09:45:15.585: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 25 09:45:15.635: INFO: created pod pod-service-account-mountsa-nomountspec Jan 25 09:45:15.635: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 25 09:45:15.678: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 25 09:45:15.679: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:45:15.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2934" for this suite. 
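What the pod matrix above establishes: automountServiceAccountToken can be set on the ServiceAccount, on the pod spec, or both, and the pod-level field wins when they conflict (note that nomountsa-mountspec mounts the token while mountsa-nomountspec does not). A minimal opt-out sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false      # service-account-level default
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-pod
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false    # pod-level setting; overrides the SA's when set
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF
# with automount disabled, no token volume is injected:
kubectl get pod nomount-pod -o jsonpath='{.spec.volumes}'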
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":279,"completed":18,"skipped":354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:45:15.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 25 09:45:20.233: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673" in namespace "projected-6932" to be "success or failure" Jan 25 09:45:20.781: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Pending", Reason="", readiness=false. Elapsed: 547.167765ms Jan 25 09:45:22.818: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Pending", Reason="", readiness=false. Elapsed: 2.584306752s Jan 25 09:45:27.265: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Pending", Reason="", readiness=false. Elapsed: 7.031160888s Jan 25 09:45:29.883: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Pending", Reason="", readiness=false. Elapsed: 9.649213442s Jan 25 09:45:32.253: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Pending", Reason="", readiness=false. Elapsed: 12.019693952s Jan 25 09:45:34.373: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Pending", Reason="", readiness=false. Elapsed: 14.139760033s Jan 25 09:45:36.390: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Pending", Reason="", readiness=false. Elapsed: 16.156117192s Jan 25 09:45:38.418: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Pending", Reason="", readiness=false. Elapsed: 18.184833799s Jan 25 09:45:40.428: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Pending", Reason="", readiness=false. Elapsed: 20.194164429s Jan 25 09:45:42.436: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Pending", Reason="", readiness=false. Elapsed: 22.203022283s Jan 25 09:45:44.446: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.213056389s STEP: Saw pod success Jan 25 09:45:44.447: INFO: Pod "downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673" satisfied condition "success or failure" Jan 25 09:45:44.451: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673 container client-container: STEP: delete the pod Jan 25 09:45:44.522: INFO: Waiting for pod downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673 to disappear Jan 25 09:45:44.535: INFO: Pod downwardapi-volume-5bdaa6ac-7d0e-422f-91a2-966c8b0f1673 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:45:44.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6932" for this suite. • [SLOW TEST:28.752 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":279,"completed":19,"skipped":394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:45:44.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on node default medium Jan 25 09:45:44.806: INFO: Waiting up to 5m0s for pod "pod-6489f3a5-300b-4934-9b6e-438c8313349e" in namespace "emptydir-9997" to be "success or failure" Jan 25 09:45:44.813: INFO: Pod "pod-6489f3a5-300b-4934-9b6e-438c8313349e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.02992ms Jan 25 09:45:46.822: INFO: Pod "pod-6489f3a5-300b-4934-9b6e-438c8313349e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015810324s Jan 25 09:45:48.830: INFO: Pod "pod-6489f3a5-300b-4934-9b6e-438c8313349e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024329061s Jan 25 09:45:50.837: INFO: Pod "pod-6489f3a5-300b-4934-9b6e-438c8313349e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031557272s Jan 25 09:45:52.846: INFO: Pod "pod-6489f3a5-300b-4934-9b6e-438c8313349e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.040039066s STEP: Saw pod success Jan 25 09:45:52.846: INFO: Pod "pod-6489f3a5-300b-4934-9b6e-438c8313349e" satisfied condition "success or failure" Jan 25 09:45:52.853: INFO: Trying to get logs from node jerma-node pod pod-6489f3a5-300b-4934-9b6e-438c8313349e container test-container: STEP: delete the pod Jan 25 09:45:52.912: INFO: Waiting for pod pod-6489f3a5-300b-4934-9b6e-438c8313349e to disappear Jan 25 09:45:52.924: INFO: Pod pod-6489f3a5-300b-4934-9b6e-438c8313349e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:45:52.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9997" for this suite. • [SLOW TEST:8.343 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":20,"skipped":445,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:45:52.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting the proxy server Jan 25 09:45:53.070: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:45:53.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5949" for this suite. 
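In the proxy test that follows, -p 0 (equivalently --port=0) asks kubectl proxy to bind an ephemeral port and print it, which the test then curls; --disable-filter turns off request filtering and is only appropriate in a test setup. Interactively (the port shown is made up):

kubectl proxy --port=0 &
# Starting to serve on 127.0.0.1:45678   <- randomly assigned port
curl http://127.0.0.1:45678/api/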
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":279,"completed":21,"skipped":447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:45:53.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 25 09:46:04.245: INFO: Successfully updated pod "pod-update-727d973f-84a7-4d41-8045-26beab959cf4" STEP: verifying the updated pod is in kubernetes Jan 25 09:46:04.345: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:46:04.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-482" for this suite. • [SLOW TEST:11.174 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":279,"completed":22,"skipped":473,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:46:04.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 25 09:46:14.731: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:46:14.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-runtime-7527" for this suite. • [SLOW TEST:10.442 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":279,"completed":23,"skipped":479,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:46:14.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 25 09:46:15.086: INFO: Waiting up to 5m0s for pod "downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6" in namespace "downward-api-7020" to be "success or failure" Jan 25 09:46:15.128: INFO: Pod "downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6": Phase="Pending", Reason="", readiness=false. Elapsed: 41.726662ms Jan 25 09:46:17.134: INFO: Pod "downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047859772s Jan 25 09:46:19.142: INFO: Pod "downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055042688s Jan 25 09:46:21.152: INFO: Pod "downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065166744s Jan 25 09:46:23.160: INFO: Pod "downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073850322s Jan 25 09:46:25.168: INFO: Pod "downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.080920762s STEP: Saw pod success Jan 25 09:46:25.168: INFO: Pod "downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6" satisfied condition "success or failure" Jan 25 09:46:25.171: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6 container client-container: STEP: delete the pod Jan 25 09:46:25.251: INFO: Waiting for pod downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6 to disappear Jan 25 09:46:25.256: INFO: Pod downwardapi-volume-54179fa4-687b-44ea-80e9-94cd6a9200d6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:46:25.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7020" for this suite. • [SLOW TEST:10.452 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":279,"completed":24,"skipped":495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:46:25.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9283 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Jan 25 09:46:25.450: INFO: Found 0 stateful pods, waiting for 3 Jan 25 09:46:35.462: INFO: Found 2 stateful pods, waiting for 3 Jan 25 09:46:45.459: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 09:46:45.459: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 09:46:45.459: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 25 09:46:55.465: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 09:46:55.465: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 09:46:55.465: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating 
stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 25 09:46:55.512: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 25 09:47:05.580: INFO: Updating stateful set ss2 Jan 25 09:47:05.653: INFO: Waiting for Pod statefulset-9283/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Jan 25 09:47:16.111: INFO: Found 2 stateful pods, waiting for 3 Jan 25 09:47:26.139: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 09:47:26.139: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 09:47:26.139: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 25 09:47:36.122: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 25 09:47:36.122: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 25 09:47:36.122: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 25 09:47:36.168: INFO: Updating stateful set ss2 Jan 25 09:47:36.286: INFO: Waiting for Pod statefulset-9283/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 25 09:47:46.777: INFO: Updating stateful set ss2 Jan 25 09:47:46.807: INFO: Waiting for StatefulSet statefulset-9283/ss2 to complete update Jan 25 09:47:46.807: INFO: Waiting for Pod statefulset-9283/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 25 09:47:56.869: INFO: Waiting for StatefulSet statefulset-9283/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Jan 25 09:48:06.823: INFO: Deleting all statefulset in ns statefulset-9283 Jan 25 09:48:06.828: INFO: Scaling statefulset ss2 to 0 Jan 25 09:48:46.865: INFO: Waiting for statefulset status.replicas updated to 0 Jan 25 09:48:46.872: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:48:46.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9283" for this suite. 
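Both behaviours above hinge on the RollingUpdate partition: only pods with an ordinal >= partition move to the new revision, which is why ss2-2 served as the canary and the phased roll-out then reached ss2-1 and ss2-0 as the partition dropped. A sketch of that flow (the container name is a placeholder, not taken from the test):

# canary: with partition=2 only ss2-2 picks up the new template
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
kubectl set image statefulset/ss2 CONTAINER_NAME=docker.io/library/httpd:2.4.39-alpine
# phased roll-out: lowering the partition extends the update to lower ordinals
kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'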
• [SLOW TEST:141.681 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":279,"completed":25,"skipped":576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:48:46.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating all guestbook components Jan 25 09:48:47.013: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Jan 25 09:48:47.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2814' Jan 25 09:48:47.491: INFO: stderr: "" Jan 25 09:48:47.491: INFO: stdout: "service/agnhost-slave created\n" Jan 25 09:48:47.492: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Jan 25 09:48:47.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2814' Jan 25 09:48:47.895: INFO: stderr: "" Jan 25 09:48:47.895: INFO: stdout: "service/agnhost-master created\n" Jan 25 09:48:47.896: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 25 09:48:47.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2814' Jan 25 09:48:48.330: INFO: stderr: "" Jan 25 09:48:48.331: INFO: stdout: "service/frontend created\n" Jan 25 09:48:48.332: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jan 25 09:48:48.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2814' Jan 25 09:48:48.675: INFO: stderr: "" Jan 25 09:48:48.675: INFO: stdout: "deployment.apps/frontend created\n" Jan 25 09:48:48.676: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 25 09:48:48.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2814' Jan 25 09:48:49.244: INFO: stderr: "" Jan 25 09:48:49.244: INFO: stdout: "deployment.apps/agnhost-master created\n" Jan 25 09:48:49.245: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 25 09:48:49.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2814' Jan 25 09:48:49.637: INFO: stderr: "" Jan 25 09:48:49.637: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Jan 25 09:48:49.637: INFO: Waiting for all frontend pods to be Running. Jan 25 09:49:09.689: INFO: Waiting for frontend to serve content. Jan 25 09:49:09.722: INFO: Trying to add a new entry to the guestbook. Jan 25 09:49:09.735: INFO: Verifying that added entry can be retrieved. Jan 25 09:49:09.749: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources Jan 25 09:49:14.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2814' Jan 25 09:49:15.040: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Jan 25 09:49:15.040: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Jan 25 09:49:15.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2814' Jan 25 09:49:15.201: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 09:49:15.201: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jan 25 09:49:15.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2814' Jan 25 09:49:15.385: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 09:49:15.385: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 25 09:49:15.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2814' Jan 25 09:49:15.526: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 09:49:15.526: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 25 09:49:15.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2814' Jan 25 09:49:15.658: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 09:49:15.659: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Jan 25 09:49:15.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2814' Jan 25 09:49:15.810: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 09:49:15.810: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:49:15.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2814" for this suite. 
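A quick cross-check for the guestbook wiring above: the transient "Failed to get response from guestbook" is usually a not-yet-ready backend rather than a wiring error, and one way to confirm the three Services actually selected pods is to list their endpoints (illustrative command, not part of this run; assumes the test namespace still exists):

  kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2814 get endpoints frontend agnhost-master agnhost-slave

Empty ENDPOINTS columns would point at label/selector mismatches; here the spec nevertheless passed, consistent with the validation simply succeeding on a later retry while the pods finished starting.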
• [SLOW TEST:28.877 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":279,"completed":26,"skipped":658,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:49:15.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 25 09:49:16.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7967' Jan 25 09:49:16.347: INFO: stderr: "" Jan 25 09:49:16.348: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868 Jan 25 09:49:18.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7967' Jan 25 09:49:23.658: INFO: stderr: "" Jan 25 09:49:23.658: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:49:23.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7967" for this suite. 
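With --restart=Never the run-pod/v1 generator creates a bare Pod, not a controller. A sketch of the roughly equivalent manifest (reconstructed for illustration; the test itself only ran the command above):

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  labels:
    run: e2e-test-httpd-pod   # label the generator is expected to add
spec:
  restartPolicy: Never        # what --restart=Never maps to
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/httpd:2.4.38-alpine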
• [SLOW TEST:8.017 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1859 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":279,"completed":27,"skipped":659,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:49:23.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 25 09:49:35.415: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:49:35.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1420" for this suite. 
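The FallbackToLogsOnError behavior asserted above ("Expected: &{DONE} to match ... DONE") relies on the container failing without writing /dev/termination-log, at which point the kubelet falls back to the tail of the container log as the termination message. A minimal sketch of a pod exercising the same policy (name, image, and command are illustrative, not the test's actual pod):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    # Emit log output, then exit non-zero without touching /dev/termination-log.
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError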
• [SLOW TEST:11.627 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":279,"completed":28,"skipped":662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:49:35.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0125 09:49:46.614661 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 25 09:49:46.614: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:49:46.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3614" for this suite. 
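"Not orphaning" here means a cascading delete: once the RC is gone, the garbage collector removes the pods it owned. The same semantics can be requested by hand through DeleteOptions; a sketch against a hypothetical RC named my-rc, assuming kubectl proxy is listening on 8001:

curl -X DELETE 'http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc' \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}'

Setting propagationPolicy to "Orphan" instead leaves the pods running without an owner, which is the behavior the companion orphaning specs assert.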
• [SLOW TEST:11.149 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":279,"completed":29,"skipped":685,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:49:46.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 09:49:55.039: INFO: Waiting up to 5m0s for pod "client-envvars-dbb18222-5ff1-45e0-9fb1-d1fdd5066343" in namespace "pods-2537" to be "success or failure" Jan 25 09:49:55.095: INFO: Pod "client-envvars-dbb18222-5ff1-45e0-9fb1-d1fdd5066343": Phase="Pending", Reason="", readiness=false. Elapsed: 55.490038ms Jan 25 09:49:57.104: INFO: Pod "client-envvars-dbb18222-5ff1-45e0-9fb1-d1fdd5066343": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06379216s Jan 25 09:49:59.169: INFO: Pod "client-envvars-dbb18222-5ff1-45e0-9fb1-d1fdd5066343": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12915029s Jan 25 09:50:01.177: INFO: Pod "client-envvars-dbb18222-5ff1-45e0-9fb1-d1fdd5066343": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136985593s Jan 25 09:50:03.184: INFO: Pod "client-envvars-dbb18222-5ff1-45e0-9fb1-d1fdd5066343": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.144537801s STEP: Saw pod success Jan 25 09:50:03.185: INFO: Pod "client-envvars-dbb18222-5ff1-45e0-9fb1-d1fdd5066343" satisfied condition "success or failure" Jan 25 09:50:03.282: INFO: Trying to get logs from node jerma-node pod client-envvars-dbb18222-5ff1-45e0-9fb1-d1fdd5066343 container env3cont: STEP: delete the pod Jan 25 09:50:03.357: INFO: Waiting for pod client-envvars-dbb18222-5ff1-45e0-9fb1-d1fdd5066343 to disappear Jan 25 09:50:03.364: INFO: Pod client-envvars-dbb18222-5ff1-45e0-9fb1-d1fdd5066343 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:50:03.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2537" for this suite. 
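The env3cont container above checks the docker-links-style variables the kubelet injects for every Service visible when the pod starts. For an illustrative Service named fooservice on port 8765 with ClusterIP 10.0.0.11 (values invented, not from this run), a pod created afterwards in the same namespace would see:

FOOSERVICE_SERVICE_HOST=10.0.0.11
FOOSERVICE_SERVICE_PORT=8765
FOOSERVICE_PORT=tcp://10.0.0.11:8765
FOOSERVICE_PORT_8765_TCP=tcp://10.0.0.11:8765
FOOSERVICE_PORT_8765_TCP_PROTO=tcp
FOOSERVICE_PORT_8765_TCP_PORT=8765
FOOSERVICE_PORT_8765_TCP_ADDR=10.0.0.11

Note the ordering constraint this implies: unlike DNS, the variables only exist for Services created before the pod, which is why the spec creates the Service first and the client pod second.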
• [SLOW TEST:16.745 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":279,"completed":30,"skipped":704,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:50:03.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4644.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4644.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4644.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 8.124.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.124.8_udp@PTR;check="$$(dig +tcp +noall +answer +search 8.124.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.124.8_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4644.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4644.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4644.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4644.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4644.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4644.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 8.124.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.124.8_udp@PTR;check="$$(dig +tcp +noall +answer +search 8.124.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.124.8_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 09:50:16.023: INFO: Unable to read wheezy_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:16.031: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:16.037: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:16.043: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:16.097: INFO: Unable to read jessie_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:16.106: INFO: Unable to read jessie_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:16.114: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:16.121: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:16.168: INFO: Lookups using dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45 failed for: [wheezy_udp@dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_udp@dns-test-service.dns-4644.svc.cluster.local jessie_tcp@dns-test-service.dns-4644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local] Jan 25 09:50:21.181: INFO: Unable to read wheezy_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:21.187: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods 
dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:21.194: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:21.200: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:21.245: INFO: Unable to read jessie_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:21.250: INFO: Unable to read jessie_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:21.255: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:21.262: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:21.299: INFO: Lookups using dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45 failed for: [wheezy_udp@dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_udp@dns-test-service.dns-4644.svc.cluster.local jessie_tcp@dns-test-service.dns-4644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local] Jan 25 09:50:26.178: INFO: Unable to read wheezy_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:26.184: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:26.189: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:26.196: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:26.237: INFO: Unable to read jessie_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the 
server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:26.241: INFO: Unable to read jessie_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:26.246: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:26.249: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:26.276: INFO: Lookups using dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45 failed for: [wheezy_udp@dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_udp@dns-test-service.dns-4644.svc.cluster.local jessie_tcp@dns-test-service.dns-4644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local] Jan 25 09:50:31.179: INFO: Unable to read wheezy_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:31.184: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:31.190: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:31.198: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:31.241: INFO: Unable to read jessie_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:31.247: INFO: Unable to read jessie_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:31.252: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:31.258: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod 
dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:31.297: INFO: Lookups using dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45 failed for: [wheezy_udp@dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_udp@dns-test-service.dns-4644.svc.cluster.local jessie_tcp@dns-test-service.dns-4644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local] Jan 25 09:50:36.181: INFO: Unable to read wheezy_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:36.189: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:36.201: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:36.212: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:36.263: INFO: Unable to read jessie_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:36.270: INFO: Unable to read jessie_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:36.275: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:36.281: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:36.321: INFO: Lookups using dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45 failed for: [wheezy_udp@dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_udp@dns-test-service.dns-4644.svc.cluster.local jessie_tcp@dns-test-service.dns-4644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local] Jan 25 
09:50:41.180: INFO: Unable to read wheezy_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:41.185: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:41.192: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:41.197: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:41.235: INFO: Unable to read jessie_udp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:41.241: INFO: Unable to read jessie_tcp@dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:41.246: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:41.251: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local from pod dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45: the server could not find the requested resource (get pods dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45) Jan 25 09:50:41.294: INFO: Lookups using dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45 failed for: [wheezy_udp@dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@dns-test-service.dns-4644.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_udp@dns-test-service.dns-4644.svc.cluster.local jessie_tcp@dns-test-service.dns-4644.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4644.svc.cluster.local] Jan 25 09:50:46.297: INFO: DNS probes using dns-4644/dns-test-261ef2dc-c0ef-4c4d-ab0a-4df86e289b45 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:50:46.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4644" for this suite. 
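Condensed, the wheezy and jessie probe loops above check three record shapes for the test service, which can be reproduced by hand from any pod in the namespace (dig flags as in the probes; the ClusterIP is taken from the PTR target in the log):

dig +short dns-test-service.dns-4644.svc.cluster.local A
dig +short _http._tcp.dns-test-service.dns-4644.svc.cluster.local SRV
dig +short -x 10.96.124.8    # PTR: 8.124.96.10.in-addr.arpa -> service name

The repeated "the server could not find the requested resource (get pods ...)" messages appear to be the framework failing to fetch result files through the API server's pod proxy while the probe pod is still starting, not DNS lookup failures; once the pod is serving, the probes converge and the run succeeds at 09:50:46.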
• [SLOW TEST:43.332 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":279,"completed":31,"skipped":721,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:50:46.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 25 09:50:46.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9309' Jan 25 09:50:47.012: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 25 09:50:47.013: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jan 25 09:50:47.028: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 25 09:50:47.265: INFO: scanned /root for discovery docs: Jan 25 09:50:47.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9309' Jan 25 09:51:11.569: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 25 09:51:11.569: INFO: stdout: "Created e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a\nScaling up e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Jan 25 09:51:11.570: INFO: stdout: "Created e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a\nScaling up e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jan 25 09:51:11.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9309' Jan 25 09:51:11.756: INFO: stderr: "" Jan 25 09:51:11.756: INFO: stdout: "e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a-n9jwl " Jan 25 09:51:11.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a-n9jwl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9309' Jan 25 09:51:11.908: INFO: stderr: "" Jan 25 09:51:11.909: INFO: stdout: "true" Jan 25 09:51:11.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a-n9jwl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9309' Jan 25 09:51:12.005: INFO: stderr: "" Jan 25 09:51:12.005: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jan 25 09:51:12.005: INFO: e2e-test-httpd-rc-5f04a2ecf7ea2971e91837662d22b19a-n9jwl is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700 Jan 25 09:51:12.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9309' Jan 25 09:51:12.161: INFO: stderr: "" Jan 25 09:51:12.161: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:51:12.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9309" for this suite. 
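Both deprecation warnings above point the same direction: RC-based kubectl rolling-update was superseded by Deployment rollouts. A modern equivalent of this spec's same-image update (Deployment and container names illustrative):

kubectl set image deployment/e2e-test-httpd httpd=docker.io/library/httpd:2.4.38-alpine
kubectl rollout status deployment/e2e-test-httpd

Unlike rolling-update, which performs the scale-up/scale-down and rename client-side (visible in the stdout above), a Deployment drives the same ReplicaSet handover server-side.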
• [SLOW TEST:25.566 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":279,"completed":32,"skipped":741,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:51:12.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override command Jan 25 09:51:12.404: INFO: Waiting up to 5m0s for pod "client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496" in namespace "containers-9697" to be "success or failure" Jan 25 09:51:12.420: INFO: Pod "client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496": Phase="Pending", Reason="", readiness=false. Elapsed: 16.133293ms Jan 25 09:51:14.429: INFO: Pod "client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024385899s Jan 25 09:51:16.437: INFO: Pod "client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032594266s Jan 25 09:51:18.445: INFO: Pod "client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040671645s Jan 25 09:51:20.456: INFO: Pod "client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051395921s Jan 25 09:51:22.464: INFO: Pod "client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059408624s STEP: Saw pod success Jan 25 09:51:22.464: INFO: Pod "client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496" satisfied condition "success or failure" Jan 25 09:51:22.467: INFO: Trying to get logs from node jerma-node pod client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496 container test-container: STEP: delete the pod Jan 25 09:51:22.639: INFO: Waiting for pod client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496 to disappear Jan 25 09:51:22.651: INFO: Pod client-containers-355ef804-4d2d-402a-bf37-d7fb768d7496 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:51:22.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9697" for this suite. 
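The override works because spec.containers[].command replaces the image's ENTRYPOINT, while args replaces its CMD. A minimal sketch of the pattern under test (pod name, image, and command invented for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]              # replaces the image ENTRYPOINT
    args: ["entrypoint", "overridden"]  # replaces the image CMD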
• [SLOW TEST:10.387 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":279,"completed":33,"skipped":744,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:51:22.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-e92f5af3-7e60-4672-9aa8-5674934f0b34 STEP: Creating a pod to test consume secrets Jan 25 09:51:22.811: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9243d5ea-7e5c-4de9-af2b-cd61cdee21b6" in namespace "projected-3717" to be "success or failure" Jan 25 09:51:22.816: INFO: Pod "pod-projected-secrets-9243d5ea-7e5c-4de9-af2b-cd61cdee21b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104569ms Jan 25 09:51:24.826: INFO: Pod "pod-projected-secrets-9243d5ea-7e5c-4de9-af2b-cd61cdee21b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014146951s Jan 25 09:51:26.859: INFO: Pod "pod-projected-secrets-9243d5ea-7e5c-4de9-af2b-cd61cdee21b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047252461s Jan 25 09:51:29.131: INFO: Pod "pod-projected-secrets-9243d5ea-7e5c-4de9-af2b-cd61cdee21b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.319641054s Jan 25 09:51:31.184: INFO: Pod "pod-projected-secrets-9243d5ea-7e5c-4de9-af2b-cd61cdee21b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.372804828s STEP: Saw pod success Jan 25 09:51:31.185: INFO: Pod "pod-projected-secrets-9243d5ea-7e5c-4de9-af2b-cd61cdee21b6" satisfied condition "success or failure" Jan 25 09:51:31.191: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-9243d5ea-7e5c-4de9-af2b-cd61cdee21b6 container projected-secret-volume-test: STEP: delete the pod Jan 25 09:51:31.250: INFO: Waiting for pod pod-projected-secrets-9243d5ea-7e5c-4de9-af2b-cd61cdee21b6 to disappear Jan 25 09:51:31.254: INFO: Pod pod-projected-secrets-9243d5ea-7e5c-4de9-af2b-cd61cdee21b6 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:51:31.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3717" for this suite. 
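The two knobs in this spec live at different levels: defaultMode sits on the projected volume, fsGroup on the pod securityContext, and together they determine the mode and group ownership of the projected secret files seen by the non-root container. A sketch under those assumptions (secret name, mount path, and IDs are illustrative; the referenced Secret must already exist):

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  securityContext:
    runAsUser: 1000    # non-root, as in the spec name
    fsGroup: 1001      # group ownership applied to the volume
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/projected"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440          # mode applied to the projected files
      sources:
      - secret:
          name: projected-secret-demo-secret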
• [SLOW TEST:8.605 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":34,"skipped":749,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:51:31.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 09:51:31.413: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 25 09:51:35.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7614 create -f -' Jan 25 09:51:37.474: INFO: stderr: "" Jan 25 09:51:37.474: INFO: stdout: "e2e-test-crd-publish-openapi-96-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 25 09:51:37.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7614 delete e2e-test-crd-publish-openapi-96-crds test-cr' Jan 25 09:51:37.640: INFO: stderr: "" Jan 25 09:51:37.641: INFO: stdout: "e2e-test-crd-publish-openapi-96-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 25 09:51:37.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7614 apply -f -' Jan 25 09:51:37.973: INFO: stderr: "" Jan 25 09:51:37.974: INFO: stdout: "e2e-test-crd-publish-openapi-96-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 25 09:51:37.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7614 delete e2e-test-crd-publish-openapi-96-crds test-cr' Jan 25 09:51:38.118: INFO: stderr: "" Jan 25 09:51:38.118: INFO: stdout: "e2e-test-crd-publish-openapi-96-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 25 09:51:38.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-96-crds' Jan 25 09:51:38.686: INFO: stderr: "" Jan 25 09:51:38.686: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-96-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:51:40.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7614" for this suite. • [SLOW TEST:9.705 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":279,"completed":35,"skipped":755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:51:40.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 09:51:41.136: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 25 09:51:41.181: INFO: Number of nodes with available pods: 0 Jan 25 09:51:41.181: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:51:43.161: INFO: Number of nodes with available pods: 0 Jan 25 09:51:43.161: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:51:43.195: INFO: Number of nodes with available pods: 0 Jan 25 09:51:43.195: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:51:44.200: INFO: Number of nodes with available pods: 0 Jan 25 09:51:44.200: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:51:45.201: INFO: Number of nodes with available pods: 0 Jan 25 09:51:45.201: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:51:48.077: INFO: Number of nodes with available pods: 0 Jan 25 09:51:48.077: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:51:48.259: INFO: Number of nodes with available pods: 0 Jan 25 09:51:48.260: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:51:49.419: INFO: Number of nodes with available pods: 0 Jan 25 09:51:49.419: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:51:50.221: INFO: Number of nodes with available pods: 0 Jan 25 09:51:50.221: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:51:51.202: INFO: Number of nodes with available pods: 2 Jan 25 09:51:51.202: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
Jan 25 09:51:51.251: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:51.251: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:52.387: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:52.387: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:53.388: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:53.388: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:54.521: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:54.522: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:55.387: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:55.387: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:56.389: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:56.390: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:57.388: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:57.388: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:58.393: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:58.393: INFO: Pod daemon-set-24nlx is not available Jan 25 09:51:58.393: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:59.385: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:51:59.385: INFO: Pod daemon-set-24nlx is not available Jan 25 09:51:59.385: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:00.386: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:00.386: INFO: Pod daemon-set-24nlx is not available Jan 25 09:52:00.386: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 25 09:52:01.387: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:01.387: INFO: Pod daemon-set-24nlx is not available Jan 25 09:52:01.387: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:02.386: INFO: Wrong image for pod: daemon-set-24nlx. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:02.386: INFO: Pod daemon-set-24nlx is not available Jan 25 09:52:02.386: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:03.388: INFO: Pod daemon-set-gqszv is not available Jan 25 09:52:03.389: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:04.385: INFO: Pod daemon-set-gqszv is not available Jan 25 09:52:04.385: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:05.385: INFO: Pod daemon-set-gqszv is not available Jan 25 09:52:05.385: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:06.387: INFO: Pod daemon-set-gqszv is not available Jan 25 09:52:06.387: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:07.524: INFO: Pod daemon-set-gqszv is not available Jan 25 09:52:07.524: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:08.398: INFO: Pod daemon-set-gqszv is not available Jan 25 09:52:08.398: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:09.386: INFO: Pod daemon-set-gqszv is not available Jan 25 09:52:09.386: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:10.391: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:11.385: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:12.384: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:13.385: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:14.386: INFO: Wrong image for pod: daemon-set-ppnlr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 25 09:52:14.386: INFO: Pod daemon-set-ppnlr is not available Jan 25 09:52:15.393: INFO: Pod daemon-set-t5mqx is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 25 09:52:15.416: INFO: Number of nodes with available pods: 1 Jan 25 09:52:15.416: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:52:16.436: INFO: Number of nodes with available pods: 1 Jan 25 09:52:16.437: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:52:17.440: INFO: Number of nodes with available pods: 1 Jan 25 09:52:17.440: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:52:18.438: INFO: Number of nodes with available pods: 1 Jan 25 09:52:18.439: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:52:19.434: INFO: Number of nodes with available pods: 1 Jan 25 09:52:19.434: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:52:20.439: INFO: Number of nodes with available pods: 1 Jan 25 09:52:20.439: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:52:21.432: INFO: Number of nodes with available pods: 2 Jan 25 09:52:21.432: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1891, will wait for the garbage collector to delete the pods Jan 25 09:52:21.519: INFO: Deleting DaemonSet.extensions daemon-set took: 9.683031ms Jan 25 09:52:21.921: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.029045ms Jan 25 09:52:28.874: INFO: Number of nodes with available pods: 0 Jan 25 09:52:28.874: INFO: Number of running nodes: 0, number of available pods: 0 Jan 25 09:52:28.884: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1891/daemonsets","resourceVersion":"4214642"},"items":null} Jan 25 09:52:28.891: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1891/pods","resourceVersion":"4214642"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:52:28.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1891" for this suite. 
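------------------------------
[Editor's note] The DaemonSet block above exercises the RollingUpdate strategy: the test swaps the pod template image and polls until every node runs the new image. With the default maxUnavailable of 1, the controller replaces one pod at a time, which is why the log shows daemon-set-24nlx going unavailable and being replaced by daemon-set-gqszv before daemon-set-ppnlr is touched. The sketch below is not part of the test run; it reproduces the same behaviour by hand with a made-up name (demo-daemon), using the two images the log reports.

# Minimal sketch, assuming a working kubeconfig; "demo-daemon" is illustrative.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-daemon
spec:
  selector:
    matchLabels:
      app: demo-daemon
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # the default: pods are replaced one node at a time
  template:
    metadata:
      labels:
        app: demo-daemon
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# Changing the template image is what triggers the rolling update:
kubectl set image daemonset/demo-daemon app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
# Waits for the rollout to finish, much like the test's poll loop:
kubectl rollout status daemonset/demo-daemon
------------------------------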
• [SLOW TEST:47.950 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":279,"completed":36,"skipped":778,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:52:28.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 25 09:52:29.075: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfd3f98e-cc29-4622-9a7f-6ce97b32c267" in namespace "downward-api-4819" to be "success or failure" Jan 25 09:52:29.082: INFO: Pod "downwardapi-volume-dfd3f98e-cc29-4622-9a7f-6ce97b32c267": Phase="Pending", Reason="", readiness=false. Elapsed: 6.744569ms Jan 25 09:52:31.099: INFO: Pod "downwardapi-volume-dfd3f98e-cc29-4622-9a7f-6ce97b32c267": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023693704s Jan 25 09:52:33.107: INFO: Pod "downwardapi-volume-dfd3f98e-cc29-4622-9a7f-6ce97b32c267": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031904024s Jan 25 09:52:35.117: INFO: Pod "downwardapi-volume-dfd3f98e-cc29-4622-9a7f-6ce97b32c267": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041958396s Jan 25 09:52:37.127: INFO: Pod "downwardapi-volume-dfd3f98e-cc29-4622-9a7f-6ce97b32c267": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052005914s STEP: Saw pod success Jan 25 09:52:37.127: INFO: Pod "downwardapi-volume-dfd3f98e-cc29-4622-9a7f-6ce97b32c267" satisfied condition "success or failure" Jan 25 09:52:37.133: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-dfd3f98e-cc29-4622-9a7f-6ce97b32c267 container client-container: STEP: delete the pod Jan 25 09:52:37.554: INFO: Waiting for pod downwardapi-volume-dfd3f98e-cc29-4622-9a7f-6ce97b32c267 to disappear Jan 25 09:52:37.560: INFO: Pod downwardapi-volume-dfd3f98e-cc29-4622-9a7f-6ce97b32c267 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:52:37.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4819" for this suite. 
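------------------------------
[Editor's note] The Downward API spec above relies on a defaulting rule: when a container declares no memory limit, a downwardAPI volume item that references limits.memory is populated with the node's allocatable memory instead. A minimal sketch of such a pod follows; it is not the test's own manifest and the pod and file names are made up, though the container name client-container matches the one the log fetches logs from.

# Minimal sketch: publish limits.memory with no limit set, so the value
# falls back to node allocatable. Names are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# Once the pod has run, its log line is the node's allocatable memory in bytes:
kubectl logs downward-demo
------------------------------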
• [SLOW TEST:8.641 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":279,"completed":37,"skipped":787,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:52:37.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap that has name configmap-test-emptyKey-a8f23d75-3a36-4239-a56b-a25d569053c9 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:52:37.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4924" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":279,"completed":38,"skipped":805,"failed":0} SSSSSSSSSSSSS ------------------------------
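[Editor's note] The ConfigMap spec above finishes in well under a second because it only asserts a validation rule: the API server must reject a ConfigMap whose data map contains an empty key. A hand-run equivalent is sketched below; the object name is made up, and the exact error wording may differ by server version.

# Minimal sketch: the apiserver refuses an empty key in .data, so nothing
# is created and apply exits non-zero.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: bad-configmap
data:
  "": "value"
EOF
------------------------------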
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:52:37.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:53:23.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-294" for this suite. • [SLOW TEST:45.437 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":279,"completed":39,"skipped":818,"failed":0} S ------------------------------
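[Editor's note] In the Container Runtime spec above, the three container names encode the restart policy under test: terminate-cmd-rpa, -rpof, and -rpn appear to stand for RestartPolicy Always, OnFailure, and Never, and for each one the test checks restartCount, phase, the Ready condition, and the container state. A hand-run sketch of the Never case follows; the pod name is made up.

# Minimal sketch: a container that exits non-zero under restartPolicy: Never.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.28
    command: ["sh", "-c", "exit 1"]
EOF
# Phase ends up Failed, the container state is 'terminated', and
# restartCount stays 0 because the policy forbids restarts:
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
------------------------------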
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:53:23.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 25 09:53:30.106: INFO: 7 pods remaining Jan 25 09:53:30.107: INFO: 0 pods has nil DeletionTimestamp Jan 25 09:53:30.107: INFO: STEP: Gathering metrics W0125 09:53:31.253334 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 25 09:53:31.253: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:53:31.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9628" for this suite. • [SLOW TEST:8.172 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":279,"completed":40,"skipped":819,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------
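[Editor's note] "If the deleteOptions says so" in the Garbage collector spec above refers to foreground cascading deletion: the owner is kept, with a deletionTimestamp set, until the garbage collector has removed all of its dependents, which is what the "7 pods remaining" poll is observing. A rough hand-run equivalent is sketched below with made-up names; note that the --cascade=foreground flag needs kubectl 1.20 or newer (the test itself, like the v1.17-era client in this run, sets propagationPolicy in the delete request body instead).

# Minimal sketch: delete an RC in the foreground and watch it linger until
# its pods are gone.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: demo-rc
spec:
  replicas: 5
  selector:
    app: demo-rc
  template:
    metadata:
      labels:
        app: demo-rc
    spec:
      containers:
      - name: app
        image: docker.io/library/busybox:1.28
        command: ["sleep", "3600"]
EOF
kubectl delete rc demo-rc --cascade=foreground &
# While the pods terminate, the RC still exists with a deletionTimestamp:
kubectl get rc demo-rc -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'
------------------------------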
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:53:31.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 25 09:53:34.975: INFO: Number of nodes with available pods: 0 Jan 25 09:53:34.975: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:37.471: INFO: Number of nodes with available pods: 0 Jan 25 09:53:37.471: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:38.252: INFO: Number of nodes with available pods: 0 Jan 25 09:53:38.252: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:40.138: INFO: Number of nodes with available pods: 0 Jan 25 09:53:40.138: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:41.088: INFO: Number of nodes with available pods: 0 Jan 25 09:53:41.088: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:41.988: INFO: Number of nodes with available pods: 0 Jan 25 09:53:41.988: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:44.428: INFO: Number of nodes with available pods: 0 Jan 25 09:53:44.428: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:45.609: INFO: Number of nodes with available pods: 0 Jan 25 09:53:45.609: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:46.037: INFO: Number of nodes with available pods: 0 Jan 25 09:53:46.037: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:46.994: INFO: Number of nodes with available pods: 0 Jan 25 09:53:46.995: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:48.072: INFO: Number of nodes with available pods: 1 Jan 25 09:53:48.072: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:48.988: INFO: Number of nodes with available pods: 1 Jan 25 09:53:48.988: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:53:49.990: INFO: Number of nodes with available pods: 2 Jan 25 09:53:49.991: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 25 09:53:50.097: INFO: Number of nodes with available pods: 1 Jan 25 09:53:50.097: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:53:51.873: INFO: Number of nodes with available pods: 1 Jan 25 09:53:51.873: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:53:52.190: INFO: Number of nodes with available pods: 1 Jan 25 09:53:52.190: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:53:53.120: INFO: Number of nodes with available pods: 1 Jan 25 09:53:53.121: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:53:54.112: INFO: Number of nodes with available pods: 1 Jan 25 09:53:54.112: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:53:55.110: INFO: Number of nodes with available pods: 1 Jan 25 09:53:55.110: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:53:56.869: INFO: Number of nodes with available pods: 1 Jan 25 09:53:56.869: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:53:57.431: INFO: Number of nodes with available pods: 1 Jan 25 09:53:57.431: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:53:58.416: INFO: Number of nodes with available pods: 1 Jan 25 09:53:58.416: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:53:59.110: INFO: Number of nodes with available pods: 1 Jan 25 09:53:59.110: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:54:00.112: INFO: Number of nodes with available pods: 2 Jan 25 09:54:00.112: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8093, will wait for the garbage collector to delete the pods Jan 25 09:54:00.199: INFO: Deleting DaemonSet.extensions daemon-set took: 12.380911ms Jan 25 09:54:00.600: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.946054ms Jan 25 09:54:12.445: INFO: Number of nodes with available pods: 0 Jan 25 09:54:12.445: INFO: Number of running nodes: 0, number of available pods: 0 Jan 25 09:54:12.449: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8093/daemonsets","resourceVersion":"4215160"},"items":null} Jan 25 09:54:12.454: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8093/pods","resourceVersion":"4215160"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:54:12.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8093" for this suite. 
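------------------------------
[Editor's note] The step above forcibly marks one daemon pod's status.phase as Failed and then waits for the DaemonSet controller to "revive" it, i.e. delete the failed pod and schedule a replacement. The test drives this through the pod status API; a rough hand-run stand-in is sketched below with placeholder names, and the --subresource flag requires kubectl 1.24 or newer (much newer than the client used in this run).

# Rough sketch: force a daemon pod into phase Failed, then watch the
# controller replace it. <daemon-pod> and <selector> are placeholders.
kubectl patch pod <daemon-pod> --subresource=status --type=merge \
  -p '{"status":{"phase":"Failed"}}'
kubectl get pods -l <selector> -w   # the failed pod is deleted and recreated
------------------------------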
• [SLOW TEST:41.055 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":279,"completed":41,"skipped":845,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:54:12.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-82c1f25c-6fba-4d4d-a6c4-ae91c7421e59 STEP: Creating a pod to test consume secrets Jan 25 09:54:12.640: INFO: Waiting up to 5m0s for pod "pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4" in namespace "secrets-4249" to be "success or failure" Jan 25 09:54:12.809: INFO: Pod "pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4": Phase="Pending", Reason="", readiness=false. Elapsed: 167.6947ms Jan 25 09:54:14.816: INFO: Pod "pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174633688s Jan 25 09:54:16.823: INFO: Pod "pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182303155s Jan 25 09:54:18.835: INFO: Pod "pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194348432s Jan 25 09:54:20.847: INFO: Pod "pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.2060634s Jan 25 09:54:22.908: INFO: Pod "pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.266647729s STEP: Saw pod success Jan 25 09:54:22.908: INFO: Pod "pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4" satisfied condition "success or failure" Jan 25 09:54:22.915: INFO: Trying to get logs from node jerma-node pod pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4 container secret-volume-test: STEP: delete the pod Jan 25 09:54:22.983: INFO: Waiting for pod pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4 to disappear Jan 25 09:54:22.989: INFO: Pod pod-secrets-3ef6cd4a-b324-4b2f-ae77-9c1e68a480e4 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:54:22.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4249" for this suite. 
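------------------------------
[Editor's note] "Consumable in multiple volumes" in the Secrets spec above simply means one Secret mounted at two different paths in the same pod. A minimal hand-run sketch follows; the secret name and mount paths are made up, though the container name secret-volume-test matches the log.

# Minimal sketch: the same Secret exposed through two volumes.
kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.28
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
EOF
------------------------------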
• [SLOW TEST:10.520 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":279,"completed":42,"skipped":869,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:54:23.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Jan 25 09:54:23.100: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:54:36.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4074" for this suite. 
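------------------------------
[Editor's note] The InitContainer spec above verifies ordering: on a pod with restartPolicy Always, every init container must run to completion, in order, before the app container starts, and the roughly thirteen seconds the spec takes is mostly that sequential startup. A minimal sketch with made-up names:

# Minimal sketch: two init containers complete in order before 'main' starts.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: docker.io/library/busybox:1.28
    command: ["sh", "-c", "echo first init done"]
  - name: init-2
    image: docker.io/library/busybox:1.28
    command: ["sh", "-c", "echo second init done"]
  containers:
  - name: main
    image: docker.io/library/busybox:1.28
    command: ["sleep", "3600"]
EOF
# STATUS moves through Init:0/2, Init:1/2, PodInitializing, then Running:
kubectl get pod init-demo -w
------------------------------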
• [SLOW TEST:13.297 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":279,"completed":43,"skipped":881,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:54:36.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9207 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9207;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9207 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9207;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9207.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9207.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9207.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9207.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9207.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9207.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9207.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9207.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9207.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9207.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9207.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 163.19.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.19.163_udp@PTR;check="$$(dig +tcp +noall +answer +search 163.19.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.19.163_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9207 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9207;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9207 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9207;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9207.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9207.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9207.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9207.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9207.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9207.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9207.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9207.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9207.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9207.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9207.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9207.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 163.19.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.19.163_udp@PTR;check="$$(dig +tcp +noall +answer +search 163.19.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.19.163_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 09:54:50.596: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.605: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.614: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.620: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.624: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.631: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.635: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.642: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.672: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.675: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.679: INFO: Unable to read jessie_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.682: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.686: INFO: Unable to read jessie_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.691: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.697: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.702: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:50.725: INFO: Lookups using dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9207 wheezy_tcp@dns-test-service.dns-9207 wheezy_udp@dns-test-service.dns-9207.svc wheezy_tcp@dns-test-service.dns-9207.svc wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9207 jessie_tcp@dns-test-service.dns-9207 jessie_udp@dns-test-service.dns-9207.svc jessie_tcp@dns-test-service.dns-9207.svc jessie_udp@_http._tcp.dns-test-service.dns-9207.svc jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc] Jan 25 09:54:55.741: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.749: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.756: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.764: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.770: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.794: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.799: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.806: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.882: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.888: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.891: INFO: Unable to read jessie_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.902: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.906: INFO: Unable to read jessie_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.911: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.914: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.917: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:54:55.945: INFO: Lookups using dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9207 wheezy_tcp@dns-test-service.dns-9207 wheezy_udp@dns-test-service.dns-9207.svc wheezy_tcp@dns-test-service.dns-9207.svc wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9207 jessie_tcp@dns-test-service.dns-9207 jessie_udp@dns-test-service.dns-9207.svc jessie_tcp@dns-test-service.dns-9207.svc jessie_udp@_http._tcp.dns-test-service.dns-9207.svc jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc] Jan 25 09:55:00.744: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.750: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.763: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207 from pod 
dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.781: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.788: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.803: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.812: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.851: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.856: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.871: INFO: Unable to read jessie_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.877: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.884: INFO: Unable to read jessie_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.889: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.893: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.897: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:00.938: INFO: Lookups using dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9207 wheezy_tcp@dns-test-service.dns-9207 wheezy_udp@dns-test-service.dns-9207.svc wheezy_tcp@dns-test-service.dns-9207.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9207 jessie_tcp@dns-test-service.dns-9207 jessie_udp@dns-test-service.dns-9207.svc jessie_tcp@dns-test-service.dns-9207.svc jessie_udp@_http._tcp.dns-test-service.dns-9207.svc jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc] Jan 25 09:55:05.737: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.744: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.750: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.756: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.763: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.767: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.771: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.776: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.928: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.934: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.941: INFO: Unable to read jessie_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.945: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.951: INFO: Unable to read jessie_udp@dns-test-service.dns-9207.svc from pod 
dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.955: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.963: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.967: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:05.988: INFO: Lookups using dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9207 wheezy_tcp@dns-test-service.dns-9207 wheezy_udp@dns-test-service.dns-9207.svc wheezy_tcp@dns-test-service.dns-9207.svc wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9207 jessie_tcp@dns-test-service.dns-9207 jessie_udp@dns-test-service.dns-9207.svc jessie_tcp@dns-test-service.dns-9207.svc jessie_udp@_http._tcp.dns-test-service.dns-9207.svc jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc] Jan 25 09:55:10.751: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.775: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.785: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.791: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.796: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.801: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.806: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.813: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod 
dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.851: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.860: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.868: INFO: Unable to read jessie_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.879: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.885: INFO: Unable to read jessie_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.893: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.898: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.906: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:10.949: INFO: Lookups using dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9207 wheezy_tcp@dns-test-service.dns-9207 wheezy_udp@dns-test-service.dns-9207.svc wheezy_tcp@dns-test-service.dns-9207.svc wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9207 jessie_tcp@dns-test-service.dns-9207 jessie_udp@dns-test-service.dns-9207.svc jessie_tcp@dns-test-service.dns-9207.svc jessie_udp@_http._tcp.dns-test-service.dns-9207.svc jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc] Jan 25 09:55:15.738: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.745: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.750: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the 
server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.758: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.764: INFO: Unable to read wheezy_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.769: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.773: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.779: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.829: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.833: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.836: INFO: Unable to read jessie_udp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.840: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207 from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.847: INFO: Unable to read jessie_udp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.852: INFO: Unable to read jessie_tcp@dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.864: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.871: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc from pod dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5: the server could not find the requested resource (get pods dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5) Jan 25 09:55:15.931: INFO: Lookups using dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9207 wheezy_tcp@dns-test-service.dns-9207 wheezy_udp@dns-test-service.dns-9207.svc wheezy_tcp@dns-test-service.dns-9207.svc wheezy_udp@_http._tcp.dns-test-service.dns-9207.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9207.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9207 jessie_tcp@dns-test-service.dns-9207 jessie_udp@dns-test-service.dns-9207.svc jessie_tcp@dns-test-service.dns-9207.svc jessie_udp@_http._tcp.dns-test-service.dns-9207.svc jessie_tcp@_http._tcp.dns-test-service.dns-9207.svc] Jan 25 09:55:20.848: INFO: DNS probes using dns-9207/dns-test-752a9271-cf21-4955-9d86-d6351c8de3e5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:55:21.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9207" for this suite. • [SLOW TEST:44.886 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":279,"completed":44,"skipped":889,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:55:21.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Starting the proxy Jan 25 09:55:21.279: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix022478363/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:55:21.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3720" for this suite. 
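For context on the proxy check just completed: kubectl can serve its API proxy over a Unix domain socket instead of a TCP port, and curl can speak HTTP over such a socket. A minimal sketch of the same flow (the socket path below is illustrative, not the test's temp path):

# Serve the API proxy on a Unix socket rather than 127.0.0.1:8001
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &

# curl talks to a Unix socket with --unix-socket; the test
# retrieves /api/ this way and checks the response body.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/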
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":279,"completed":45,"skipped":900,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:55:21.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 25 09:55:21.589: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1" in namespace "projected-7582" to be "success or failure" Jan 25 09:55:21.668: INFO: Pod "downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 79.200173ms Jan 25 09:55:23.675: INFO: Pod "downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086279453s Jan 25 09:55:25.683: INFO: Pod "downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094612835s Jan 25 09:55:27.693: INFO: Pod "downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104178163s Jan 25 09:55:29.703: INFO: Pod "downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113877476s Jan 25 09:55:31.712: INFO: Pod "downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.123101986s STEP: Saw pod success Jan 25 09:55:31.712: INFO: Pod "downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1" satisfied condition "success or failure" Jan 25 09:55:31.715: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1 container client-container: STEP: delete the pod Jan 25 09:55:31.752: INFO: Waiting for pod downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1 to disappear Jan 25 09:55:31.775: INFO: Pod downwardapi-volume-7b9e0761-c607-4ac3-b678-17272a4adbb1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:55:31.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7582" for this suite. 
• [SLOW TEST:10.403 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":279,"completed":46,"skipped":916,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:55:31.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jan 25 09:55:32.275: INFO: Number of nodes with available pods: 0 Jan 25 09:55:32.275: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:55:34.355: INFO: Number of nodes with available pods: 0 Jan 25 09:55:34.355: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:55:35.738: INFO: Number of nodes with available pods: 0 Jan 25 09:55:35.738: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:55:36.298: INFO: Number of nodes with available pods: 0 Jan 25 09:55:36.299: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:55:37.374: INFO: Number of nodes with available pods: 0 Jan 25 09:55:37.375: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:55:40.151: INFO: Number of nodes with available pods: 0 Jan 25 09:55:40.152: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:55:40.558: INFO: Number of nodes with available pods: 0 Jan 25 09:55:40.559: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:55:41.345: INFO: Number of nodes with available pods: 0 Jan 25 09:55:41.345: INFO: Node jerma-node is running more than one daemon pod Jan 25 09:55:42.308: INFO: Number of nodes with available pods: 1 Jan 25 09:55:42.308: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:43.295: INFO: Number of nodes with available pods: 2 Jan 25 09:55:43.295: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
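The revival check that follows amounts to deleting one daemon pod and polling until the DaemonSet controller schedules a replacement; roughly (namespace from this run; the label selector is an assumption about how the test labels its pods):

# Delete one daemon pod; the DaemonSet controller should replace it.
POD=$(kubectl -n daemonsets-6243 get pods -l daemonset-name=daemon-set -o name | head -n 1)
kubectl -n daemonsets-6243 delete "$POD"
# Watch until every node again runs an available daemon pod.
kubectl -n daemonsets-6243 get pods -o wide -w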
Jan 25 09:55:43.336: INFO: Number of nodes with available pods: 1 Jan 25 09:55:43.336: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:44.351: INFO: Number of nodes with available pods: 1 Jan 25 09:55:44.351: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:45.370: INFO: Number of nodes with available pods: 1 Jan 25 09:55:45.370: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:46.347: INFO: Number of nodes with available pods: 1 Jan 25 09:55:46.348: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:47.351: INFO: Number of nodes with available pods: 1 Jan 25 09:55:47.352: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:48.528: INFO: Number of nodes with available pods: 1 Jan 25 09:55:48.528: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:49.352: INFO: Number of nodes with available pods: 1 Jan 25 09:55:49.352: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:50.348: INFO: Number of nodes with available pods: 1 Jan 25 09:55:50.349: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:51.351: INFO: Number of nodes with available pods: 1 Jan 25 09:55:51.351: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:52.347: INFO: Number of nodes with available pods: 1 Jan 25 09:55:52.347: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:53.351: INFO: Number of nodes with available pods: 1 Jan 25 09:55:53.351: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:54.347: INFO: Number of nodes with available pods: 1 Jan 25 09:55:54.348: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:55.352: INFO: Number of nodes with available pods: 1 Jan 25 09:55:55.353: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:56.351: INFO: Number of nodes with available pods: 1 Jan 25 09:55:56.351: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:57.976: INFO: Number of nodes with available pods: 1 Jan 25 09:55:57.977: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:58.779: INFO: Number of nodes with available pods: 1 Jan 25 09:55:58.779: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:55:59.350: INFO: Number of nodes with available pods: 1 Jan 25 09:55:59.350: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:56:00.355: INFO: Number of nodes with available pods: 1 Jan 25 09:56:00.355: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 09:56:01.350: INFO: Number of nodes with available pods: 2 Jan 25 09:56:01.351: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6243, will wait for the garbage collector to delete the pods Jan 25 09:56:01.423: INFO: Deleting DaemonSet.extensions daemon-set took: 13.368948ms Jan 25 09:56:01.724: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.397982ms Jan 25 09:56:13.161: INFO: Number of 
nodes with available pods: 0 Jan 25 09:56:13.161: INFO: Number of running nodes: 0, number of available pods: 0 Jan 25 09:56:13.167: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6243/daemonsets","resourceVersion":"4215666"},"items":null} Jan 25 09:56:13.171: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6243/pods","resourceVersion":"4215666"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:56:13.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6243" for this suite. • [SLOW TEST:41.553 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":279,"completed":47,"skipped":939,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:56:13.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Jan 25 09:56:13.416: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 25 09:56:13.474: INFO: Waiting for terminating namespaces to be deleted... 
Jan 25 09:56:13.478: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 25 09:56:13.489: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 25 09:56:13.489: INFO: Container weave ready: true, restart count 1 Jan 25 09:56:13.489: INFO: Container weave-npc ready: true, restart count 0 Jan 25 09:56:13.489: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 25 09:56:13.489: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 09:56:13.489: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 25 09:56:13.517: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 25 09:56:13.517: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 09:56:13.517: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 25 09:56:13.517: INFO: Container weave ready: true, restart count 0 Jan 25 09:56:13.517: INFO: Container weave-npc ready: true, restart count 0 Jan 25 09:56:13.517: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 25 09:56:13.517: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 25 09:56:13.517: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 25 09:56:13.517: INFO: Container kube-scheduler ready: true, restart count 3 Jan 25 09:56:13.517: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 25 09:56:13.517: INFO: Container etcd ready: true, restart count 1 Jan 25 09:56:13.517: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 25 09:56:13.517: INFO: Container kube-apiserver ready: true, restart count 1 Jan 25 09:56:13.517: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 25 09:56:13.517: INFO: Container coredns ready: true, restart count 0 Jan 25 09:56:13.517: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 25 09:56:13.517: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ed18ac8676e976], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:56:14.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6390" for this suite. 
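The FailedScheduling event above is the expected outcome whenever a pod's nodeSelector matches no node in the cluster. A minimal reproduction (label key/value and image are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty          # no node carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# The pod stays Pending and the scheduler records an event like:
#   0/2 nodes are available: 2 node(s) didn't match node selector.
kubectl describe pod restricted-pod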
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":279,"completed":48,"skipped":950,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:56:14.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 09:56:14.687: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 25 09:56:18.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 create -f -' Jan 25 09:56:20.864: INFO: stderr: "" Jan 25 09:56:20.864: INFO: stdout: "e2e-test-crd-publish-openapi-4510-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 25 09:56:20.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 delete e2e-test-crd-publish-openapi-4510-crds test-cr' Jan 25 09:56:21.004: INFO: stderr: "" Jan 25 09:56:21.005: INFO: stdout: "e2e-test-crd-publish-openapi-4510-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 25 09:56:21.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 apply -f -' Jan 25 09:56:21.276: INFO: stderr: "" Jan 25 09:56:21.276: INFO: stdout: "e2e-test-crd-publish-openapi-4510-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 25 09:56:21.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4643 delete e2e-test-crd-publish-openapi-4510-crds test-cr' Jan 25 09:56:21.407: INFO: stderr: "" Jan 25 09:56:21.407: INFO: stdout: "e2e-test-crd-publish-openapi-4510-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jan 25 09:56:21.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4510-crds' Jan 25 09:56:21.872: INFO: stderr: "" Jan 25 09:56:21.872: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4510-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:56:26.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4643" for this suite. 
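What "without validation schema" means above: the CRD's only schema preserves unknown fields, so any property passes client- and server-side validation and kubectl explain has nothing to describe (hence the empty DESCRIPTION in the stdout). A sketch of such a CRD under apiextensions.k8s.io/v1 (group and names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # no further properties: everything is accepted as-is
        x-kubernetes-preserve-unknown-fields: true
EOF

# Arbitrary unknown properties now validate, as in the test:
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: TestCrd
metadata:
  name: test-cr
spec:
  anything: goes
EOF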
• [SLOW TEST:11.610 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":279,"completed":49,"skipped":970,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:56:26.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 25 09:56:26.328: INFO: Waiting up to 5m0s for pod "pod-708285ef-0150-400a-9b88-567a989ec87f" in namespace "emptydir-8978" to be "success or failure" Jan 25 09:56:26.335: INFO: Pod "pod-708285ef-0150-400a-9b88-567a989ec87f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.572973ms Jan 25 09:56:28.349: INFO: Pod "pod-708285ef-0150-400a-9b88-567a989ec87f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021015131s Jan 25 09:56:30.359: INFO: Pod "pod-708285ef-0150-400a-9b88-567a989ec87f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030531776s Jan 25 09:56:32.370: INFO: Pod "pod-708285ef-0150-400a-9b88-567a989ec87f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04122047s Jan 25 09:56:34.375: INFO: Pod "pod-708285ef-0150-400a-9b88-567a989ec87f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046425984s STEP: Saw pod success Jan 25 09:56:34.375: INFO: Pod "pod-708285ef-0150-400a-9b88-567a989ec87f" satisfied condition "success or failure" Jan 25 09:56:34.377: INFO: Trying to get logs from node jerma-node pod pod-708285ef-0150-400a-9b88-567a989ec87f container test-container: STEP: delete the pod Jan 25 09:56:34.581: INFO: Waiting for pod pod-708285ef-0150-400a-9b88-567a989ec87f to disappear Jan 25 09:56:34.590: INFO: Pod pod-708285ef-0150-400a-9b88-567a989ec87f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:56:34.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8978" for this suite. 
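The "volume type on tmpfs" pod above boils down to an emptyDir with medium Memory, which the kubelet backs with a tmpfs mount that defaults to mode 0777. A minimal sketch (names and image illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # print the mount's filesystem type and the directory's mode
    command: ["sh", "-c", "mount | grep /test-volume; stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF

kubectl logs tmpfs-mode-demo   # expect a tmpfs mount line and mode 777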
• [SLOW TEST:8.398 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":50,"skipped":1012,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:56:34.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:56:48.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-921" for this suite. • [SLOW TEST:14.132 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":279,"completed":51,"skipped":1015,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:56:48.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-1deb20ea-cd07-441b-a9dd-c3e4e68b9d76 STEP: Creating a pod to test consume configMaps Jan 25 09:56:48.870: INFO: Waiting up to 5m0s for pod "pod-configmaps-4517e1e4-d73a-40e3-a3cd-91af37f6c245" in namespace "configmap-6906" to be "success or failure" Jan 25 09:56:48.882: INFO: Pod "pod-configmaps-4517e1e4-d73a-40e3-a3cd-91af37f6c245": Phase="Pending", Reason="", readiness=false. Elapsed: 11.841133ms Jan 25 09:56:50.889: INFO: Pod "pod-configmaps-4517e1e4-d73a-40e3-a3cd-91af37f6c245": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019079017s Jan 25 09:56:52.897: INFO: Pod "pod-configmaps-4517e1e4-d73a-40e3-a3cd-91af37f6c245": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02699861s Jan 25 09:56:54.991: INFO: Pod "pod-configmaps-4517e1e4-d73a-40e3-a3cd-91af37f6c245": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121361303s Jan 25 09:56:57.000: INFO: Pod "pod-configmaps-4517e1e4-d73a-40e3-a3cd-91af37f6c245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.129631632s STEP: Saw pod success Jan 25 09:56:57.000: INFO: Pod "pod-configmaps-4517e1e4-d73a-40e3-a3cd-91af37f6c245" satisfied condition "success or failure" Jan 25 09:56:57.096: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4517e1e4-d73a-40e3-a3cd-91af37f6c245 container configmap-volume-test: STEP: delete the pod Jan 25 09:56:57.145: INFO: Waiting for pod pod-configmaps-4517e1e4-d73a-40e3-a3cd-91af37f6c245 to disappear Jan 25 09:56:57.148: INFO: Pod pod-configmaps-4517e1e4-d73a-40e3-a3cd-91af37f6c245 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:56:57.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6906" for this suite. 
• [SLOW TEST:8.426 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":52,"skipped":1018,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:56:57.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 25 09:57:04.661: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:57:04.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9700" for this suite. 
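The assertion above ("Expected: &{} to match ... --") checks for an empty termination message: with TerminationMessagePolicy FallbackToLogsOnError, container logs are copied into the message only when the container fails, so a succeeding container that writes nothing to the termination path ends with an empty message. The relevant fields, sketched:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# Empty on success; the log tail is used only on failure:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'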
• [SLOW TEST:7.579 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":279,"completed":53,"skipped":1037,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:57:04.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 25 09:57:04.926: INFO: Waiting up to 5m0s for pod "downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e" in namespace "downward-api-963" to be "success or failure" Jan 25 09:57:04.940: INFO: Pod "downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.77166ms Jan 25 09:57:06.949: INFO: Pod "downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023251628s Jan 25 09:57:08.956: INFO: Pod "downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030229902s Jan 25 09:57:10.964: INFO: Pod "downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037462032s Jan 25 09:57:12.971: INFO: Pod "downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044560521s Jan 25 09:57:14.983: INFO: Pod "downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.056411223s STEP: Saw pod success Jan 25 09:57:14.983: INFO: Pod "downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e" satisfied condition "success or failure" Jan 25 09:57:14.988: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e container client-container: STEP: delete the pod Jan 25 09:57:15.048: INFO: Waiting for pod downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e to disappear Jan 25 09:57:15.067: INFO: Pod downwardapi-volume-854cb11e-1936-4374-952c-b424185ea98e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:57:15.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-963" for this suite. • [SLOW TEST:10.342 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":279,"completed":54,"skipped":1050,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:57:15.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:57:15.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5744" for this suite. 
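The "secure master service" check above verifies that the apiserver is published cluster-internally as the default kubernetes Service with an HTTPS port. The same thing can be inspected by hand:

# The default/kubernetes Service should expose port 443 named "https":
kubectl get service kubernetes -n default
kubectl get service kubernetes -n default \
  -o jsonpath='{.spec.ports[0].name}:{.spec.ports[0].port}{"\n"}'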
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":279,"completed":55,"skipped":1071,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:57:15.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Jan 25 09:57:15.328: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 25 09:57:15.354: INFO: Waiting for terminating namespaces to be deleted... Jan 25 09:57:15.361: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 25 09:57:15.372: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 25 09:57:15.372: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 09:57:15.372: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 25 09:57:15.372: INFO: Container weave ready: true, restart count 1 Jan 25 09:57:15.372: INFO: Container weave-npc ready: true, restart count 0 Jan 25 09:57:15.372: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 25 09:57:15.482: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 25 09:57:15.482: INFO: Container coredns ready: true, restart count 0 Jan 25 09:57:15.482: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 25 09:57:15.482: INFO: Container coredns ready: true, restart count 0 Jan 25 09:57:15.482: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 25 09:57:15.482: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 25 09:57:15.482: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 25 09:57:15.482: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 09:57:15.482: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 25 09:57:15.482: INFO: Container weave ready: true, restart count 0 Jan 25 09:57:15.482: INFO: Container weave-npc ready: true, restart count 0 Jan 25 09:57:15.482: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 25 09:57:15.482: INFO: Container kube-scheduler ready: true, restart count 3 Jan 25 09:57:15.482: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 25 09:57:15.482: INFO: Container kube-apiserver ready: 
true, restart count 1 Jan 25 09:57:15.482: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 25 09:57:15.482: INFO: Container etcd ready: true, restart count 1 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4dfef32a-4242-4da9-96df-2e38bd871a0a 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-4dfef32a-4242-4da9-96df-2e38bd871a0a off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-4dfef32a-4242-4da9-96df-2e38bd871a0a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:57:33.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1825" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:18.607 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":279,"completed":56,"skipped":1071,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:57:33.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-3390 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 25 09:57:33.954: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 25 09:57:34.110: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 25 09:57:36.455: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 25 09:57:38.123: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 25 09:57:40.742: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 25 
09:57:42.164: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 09:57:44.130: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 09:57:46.118: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 09:57:48.121: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 09:57:50.119: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 09:57:52.119: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 09:57:54.117: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 09:57:56.119: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 09:57:58.122: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 25 09:57:58.136: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 25 09:58:06.229: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.2:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3390 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 09:58:06.229: INFO: >>> kubeConfig: /root/.kube/config I0125 09:58:06.305140 9 log.go:172] (0xc005cff760) (0xc002c3d2c0) Create stream I0125 09:58:06.305452 9 log.go:172] (0xc005cff760) (0xc002c3d2c0) Stream added, broadcasting: 1 I0125 09:58:06.324077 9 log.go:172] (0xc005cff760) Reply frame received for 1 I0125 09:58:06.324205 9 log.go:172] (0xc005cff760) (0xc002c3d360) Create stream I0125 09:58:06.324230 9 log.go:172] (0xc005cff760) (0xc002c3d360) Stream added, broadcasting: 3 I0125 09:58:06.330161 9 log.go:172] (0xc005cff760) Reply frame received for 3 I0125 09:58:06.330233 9 log.go:172] (0xc005cff760) (0xc001868000) Create stream I0125 09:58:06.330282 9 log.go:172] (0xc005cff760) (0xc001868000) Stream added, broadcasting: 5 I0125 09:58:06.332779 9 log.go:172] (0xc005cff760) Reply frame received for 5 I0125 09:58:06.420565 9 log.go:172] (0xc005cff760) Data frame received for 3 I0125 09:58:06.420699 9 log.go:172] (0xc002c3d360) (3) Data frame handling I0125 09:58:06.420745 9 log.go:172] (0xc002c3d360) (3) Data frame sent I0125 09:58:06.501038 9 log.go:172] (0xc005cff760) Data frame received for 1 I0125 09:58:06.501564 9 log.go:172] (0xc002c3d2c0) (1) Data frame handling I0125 09:58:06.501639 9 log.go:172] (0xc002c3d2c0) (1) Data frame sent I0125 09:58:06.501709 9 log.go:172] (0xc005cff760) (0xc002c3d2c0) Stream removed, broadcasting: 1 I0125 09:58:06.502139 9 log.go:172] (0xc005cff760) (0xc002c3d360) Stream removed, broadcasting: 3 I0125 09:58:06.503110 9 log.go:172] (0xc005cff760) (0xc001868000) Stream removed, broadcasting: 5 I0125 09:58:06.503214 9 log.go:172] (0xc005cff760) Go away received I0125 09:58:06.503955 9 log.go:172] (0xc005cff760) (0xc002c3d2c0) Stream removed, broadcasting: 1 I0125 09:58:06.504118 9 log.go:172] (0xc005cff760) (0xc002c3d360) Stream removed, broadcasting: 3 I0125 09:58:06.504182 9 log.go:172] (0xc005cff760) (0xc001868000) Stream removed, broadcasting: 5 Jan 25 09:58:06.504: INFO: Found all expected endpoints: [netserver-0] Jan 25 09:58:06.517: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3390 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 09:58:06.517: INFO: >>> kubeConfig: /root/.kube/config 
I0125 09:58:06.582929 9 log.go:172] (0xc0063e42c0) (0xc001868460) Create stream I0125 09:58:06.583249 9 log.go:172] (0xc0063e42c0) (0xc001868460) Stream added, broadcasting: 1 I0125 09:58:06.598154 9 log.go:172] (0xc0063e42c0) Reply frame received for 1 I0125 09:58:06.598335 9 log.go:172] (0xc0063e42c0) (0xc002c3d5e0) Create stream I0125 09:58:06.598382 9 log.go:172] (0xc0063e42c0) (0xc002c3d5e0) Stream added, broadcasting: 3 I0125 09:58:06.601416 9 log.go:172] (0xc0063e42c0) Reply frame received for 3 I0125 09:58:06.601496 9 log.go:172] (0xc0063e42c0) (0xc0016c6640) Create stream I0125 09:58:06.601522 9 log.go:172] (0xc0063e42c0) (0xc0016c6640) Stream added, broadcasting: 5 I0125 09:58:06.604791 9 log.go:172] (0xc0063e42c0) Reply frame received for 5 I0125 09:58:06.762458 9 log.go:172] (0xc0063e42c0) Data frame received for 3 I0125 09:58:06.762875 9 log.go:172] (0xc002c3d5e0) (3) Data frame handling I0125 09:58:06.762951 9 log.go:172] (0xc002c3d5e0) (3) Data frame sent I0125 09:58:06.933783 9 log.go:172] (0xc0063e42c0) Data frame received for 1 I0125 09:58:06.934430 9 log.go:172] (0xc0063e42c0) (0xc002c3d5e0) Stream removed, broadcasting: 3 I0125 09:58:06.934601 9 log.go:172] (0xc001868460) (1) Data frame handling I0125 09:58:06.934750 9 log.go:172] (0xc001868460) (1) Data frame sent I0125 09:58:06.934892 9 log.go:172] (0xc0063e42c0) (0xc0016c6640) Stream removed, broadcasting: 5 I0125 09:58:06.935026 9 log.go:172] (0xc0063e42c0) (0xc001868460) Stream removed, broadcasting: 1 I0125 09:58:06.935102 9 log.go:172] (0xc0063e42c0) Go away received I0125 09:58:06.935717 9 log.go:172] (0xc0063e42c0) (0xc001868460) Stream removed, broadcasting: 1 I0125 09:58:06.935752 9 log.go:172] (0xc0063e42c0) (0xc002c3d5e0) Stream removed, broadcasting: 3 I0125 09:58:06.935772 9 log.go:172] (0xc0063e42c0) (0xc0016c6640) Stream removed, broadcasting: 5 Jan 25 09:58:06.935: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:58:06.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3390" for this suite. 
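Each endpoint check above execs into the host-network test pod and curls a netserver pod's IP directly, exactly as the ExecWithOptions entries record. The same probe by hand, using this run's namespace and pod IP:

# Ask a netserver pod (by pod IP) for its hostname from the
# host-network test pod; one such call per expected endpoint.
kubectl -n pod-network-test-3390 exec host-test-container-pod -- \
  sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.2:8080/hostName"
# The test passes once every netserver's hostname has been observed.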
• [SLOW TEST:33.123 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":57,"skipped":1090,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:58:06.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 25 09:58:07.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5795' Jan 25 09:58:07.194: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 25 09:58:07.194: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Jan 25 09:58:07.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5795' Jan 25 09:58:07.367: INFO: stderr: "" Jan 25 09:58:07.367: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:58:07.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5795" for this suite. 
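As the stderr above notes, generator-based kubectl run is deprecated. A close modern equivalent of the command in this test is kubectl create job (note one difference: create job defaults the pod restartPolicy to Never, whereas --restart=OnFailure set OnFailure):

# Deprecated form exercised by the test:
#   kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 \
#     --image=docker.io/library/httpd:2.4.38-alpine
# Current form:
kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
kubectl delete jobs e2e-test-httpd-job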
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":279,"completed":58,"skipped":1103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:58:07.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating replication controller my-hostname-basic-c20e0c1d-0d6c-4f50-8e00-40b2e35d5b72 Jan 25 09:58:07.522: INFO: Pod name my-hostname-basic-c20e0c1d-0d6c-4f50-8e00-40b2e35d5b72: Found 0 pods out of 1 Jan 25 09:58:13.602: INFO: Pod name my-hostname-basic-c20e0c1d-0d6c-4f50-8e00-40b2e35d5b72: Found 1 pods out of 1 Jan 25 09:58:13.603: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c20e0c1d-0d6c-4f50-8e00-40b2e35d5b72" are running Jan 25 09:58:23.646: INFO: Pod "my-hostname-basic-c20e0c1d-0d6c-4f50-8e00-40b2e35d5b72-2sjg7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 09:58:07 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 09:58:07 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c20e0c1d-0d6c-4f50-8e00-40b2e35d5b72]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 09:58:07 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c20e0c1d-0d6c-4f50-8e00-40b2e35d5b72]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 09:58:07 +0000 UTC Reason: Message:}]) Jan 25 09:58:23.647: INFO: Trying to dial the pod Jan 25 09:58:28.687: INFO: Controller my-hostname-basic-c20e0c1d-0d6c-4f50-8e00-40b2e35d5b72: Got expected result from replica 1 [my-hostname-basic-c20e0c1d-0d6c-4f50-8e00-40b2e35d5b72-2sjg7]: "my-hostname-basic-c20e0c1d-0d6c-4f50-8e00-40b2e35d5b72-2sjg7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:58:28.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2167" for this suite. 
• [SLOW TEST:21.319 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":279,"completed":59,"skipped":1220,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:58:28.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 09:58:28.858: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 25 09:58:32.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4752 create -f -' Jan 25 09:58:35.313: INFO: stderr: "" Jan 25 09:58:35.314: INFO: stdout: "e2e-test-crd-publish-openapi-6657-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 25 09:58:35.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4752 delete e2e-test-crd-publish-openapi-6657-crds test-cr' Jan 25 09:58:35.478: INFO: stderr: "" Jan 25 09:58:35.478: INFO: stdout: "e2e-test-crd-publish-openapi-6657-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 25 09:58:35.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4752 apply -f -' Jan 25 09:58:35.882: INFO: stderr: "" Jan 25 09:58:35.882: INFO: stdout: "e2e-test-crd-publish-openapi-6657-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 25 09:58:35.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4752 delete e2e-test-crd-publish-openapi-6657-crds test-cr' Jan 25 09:58:36.022: INFO: stderr: "" Jan 25 09:58:36.022: INFO: stdout: "e2e-test-crd-publish-openapi-6657-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 25 09:58:36.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6657-crds' Jan 25 09:58:36.406: INFO: stderr: "" Jan 25 09:58:36.406: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6657-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:58:40.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4752" for this suite. • [SLOW TEST:11.387 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":279,"completed":60,"skipped":1227,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:58:40.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Jan 25 09:58:40.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2574' Jan 25 09:58:40.624: INFO: stderr: "" Jan 25 09:58:40.624: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 25 09:58:40.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2574' Jan 25 09:58:40.721: INFO: stderr: "" Jan 25 09:58:40.721: INFO: stdout: "update-demo-nautilus-gg2mh update-demo-nautilus-jxbnn " Jan 25 09:58:40.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gg2mh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2574' Jan 25 09:58:40.892: INFO: stderr: "" Jan 25 09:58:40.893: INFO: stdout: "" Jan 25 09:58:40.893: INFO: update-demo-nautilus-gg2mh is created but not running Jan 25 09:58:45.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2574' Jan 25 09:58:47.271: INFO: stderr: "" Jan 25 09:58:47.271: INFO: stdout: "update-demo-nautilus-gg2mh update-demo-nautilus-jxbnn " Jan 25 09:58:47.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gg2mh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2574' Jan 25 09:58:47.873: INFO: stderr: "" Jan 25 09:58:47.874: INFO: stdout: "" Jan 25 09:58:47.874: INFO: update-demo-nautilus-gg2mh is created but not running Jan 25 09:58:52.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2574' Jan 25 09:58:53.073: INFO: stderr: "" Jan 25 09:58:53.073: INFO: stdout: "update-demo-nautilus-gg2mh update-demo-nautilus-jxbnn " Jan 25 09:58:53.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gg2mh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2574' Jan 25 09:58:53.169: INFO: stderr: "" Jan 25 09:58:53.169: INFO: stdout: "true" Jan 25 09:58:53.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gg2mh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2574' Jan 25 09:58:53.268: INFO: stderr: "" Jan 25 09:58:53.269: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 09:58:53.269: INFO: validating pod update-demo-nautilus-gg2mh Jan 25 09:58:53.281: INFO: got data: { "image": "nautilus.jpg" } Jan 25 09:58:53.281: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 09:58:53.281: INFO: update-demo-nautilus-gg2mh is verified up and running Jan 25 09:58:53.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxbnn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2574' Jan 25 09:58:53.364: INFO: stderr: "" Jan 25 09:58:53.364: INFO: stdout: "true" Jan 25 09:58:53.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxbnn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2574' Jan 25 09:58:53.515: INFO: stderr: "" Jan 25 09:58:53.515: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 25 09:58:53.515: INFO: validating pod update-demo-nautilus-jxbnn Jan 25 09:58:53.525: INFO: got data: { "image": "nautilus.jpg" } Jan 25 09:58:53.525: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 25 09:58:53.525: INFO: update-demo-nautilus-jxbnn is verified up and running STEP: using delete to clean up resources Jan 25 09:58:53.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2574' Jan 25 09:58:53.728: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 25 09:58:53.729: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 25 09:58:53.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2574' Jan 25 09:58:53.843: INFO: stderr: "No resources found in kubectl-2574 namespace.\n" Jan 25 09:58:53.843: INFO: stdout: "" Jan 25 09:58:53.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2574 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 25 09:58:54.112: INFO: stderr: "" Jan 25 09:58:54.112: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:58:54.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2574" for this suite. 
• [SLOW TEST:14.049 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":279,"completed":61,"skipped":1278,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:58:54.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating secret secrets-5449/secret-test-a9429531-931f-462d-9fd7-973da6cf20f6 STEP: Creating a pod to test consume secrets Jan 25 09:58:55.489: INFO: Waiting up to 5m0s for pod "pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b" in namespace "secrets-5449" to be "success or failure" Jan 25 09:58:55.568: INFO: Pod "pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b": Phase="Pending", Reason="", readiness=false. Elapsed: 78.885548ms Jan 25 09:58:57.815: INFO: Pod "pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325802742s Jan 25 09:58:59.860: INFO: Pod "pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370283066s Jan 25 09:59:01.893: INFO: Pod "pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.403365478s Jan 25 09:59:03.904: INFO: Pod "pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.41494354s Jan 25 09:59:05.912: INFO: Pod "pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.422492322s Jan 25 09:59:07.921: INFO: Pod "pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.431131388s STEP: Saw pod success Jan 25 09:59:07.921: INFO: Pod "pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b" satisfied condition "success or failure" Jan 25 09:59:07.926: INFO: Trying to get logs from node jerma-node pod pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b container env-test: STEP: delete the pod Jan 25 09:59:07.987: INFO: Waiting for pod pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b to disappear Jan 25 09:59:07.999: INFO: Pod pod-configmaps-238c1382-650e-48fe-81be-6f062c40173b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:59:07.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5449" for this suite. • [SLOW TEST:13.870 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":279,"completed":62,"skipped":1284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:59:08.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 25 09:59:18.658: INFO: Successfully updated pod "adopt-release-48556" STEP: Checking that the Job readopts the Pod Jan 25 09:59:18.658: INFO: Waiting up to 15m0s for pod "adopt-release-48556" in namespace "job-4977" to be "adopted" Jan 25 09:59:18.689: INFO: Pod "adopt-release-48556": Phase="Running", Reason="", readiness=true. Elapsed: 30.607942ms Jan 25 09:59:20.697: INFO: Pod "adopt-release-48556": Phase="Running", Reason="", readiness=true. Elapsed: 2.038682354s Jan 25 09:59:20.697: INFO: Pod "adopt-release-48556" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 25 09:59:21.228: INFO: Successfully updated pod "adopt-release-48556" STEP: Checking that the Job releases the Pod Jan 25 09:59:21.228: INFO: Waiting up to 15m0s for pod "adopt-release-48556" in namespace "job-4977" to be "released" Jan 25 09:59:21.275: INFO: Pod "adopt-release-48556": Phase="Running", Reason="", readiness=true. Elapsed: 46.790141ms Jan 25 09:59:23.310: INFO: Pod "adopt-release-48556": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.081587802s Jan 25 09:59:23.310: INFO: Pod "adopt-release-48556" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:59:23.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4977" for this suite. • [SLOW TEST:15.308 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":279,"completed":63,"skipped":1325,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:59:23.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 25 09:59:36.033: INFO: Successfully updated pod "pod-update-activedeadlineseconds-aeff6d95-9eb2-49ae-9107-216ebcf0fbcc" Jan 25 09:59:36.033: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-aeff6d95-9eb2-49ae-9107-216ebcf0fbcc" in namespace "pods-2391" to be "terminated due to deadline exceeded" Jan 25 09:59:36.093: INFO: Pod "pod-update-activedeadlineseconds-aeff6d95-9eb2-49ae-9107-216ebcf0fbcc": Phase="Running", Reason="", readiness=true. Elapsed: 60.381012ms Jan 25 09:59:38.099: INFO: Pod "pod-update-activedeadlineseconds-aeff6d95-9eb2-49ae-9107-216ebcf0fbcc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.066166339s Jan 25 09:59:38.099: INFO: Pod "pod-update-activedeadlineseconds-aeff6d95-9eb2-49ae-9107-216ebcf0fbcc" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:59:38.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2391" for this suite. 
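What the test does through the client here is an in-place update of spec.activeDeadlineSeconds, one of the few pod-spec fields that may be changed on a running pod (it can only be set or lowered, never raised or removed). A hand-run sketch of the same mutation, with a placeholder pod name and an illustrative 5-second deadline:

    kubectl patch pod POD_NAME --namespace=pods-2391 \
      -p '{"spec":{"activeDeadlineSeconds":5}}'

Once the deadline elapses, the kubelet kills the pod and its phase flips to Failed, which is exactly the Phase="Failed", Reason="DeadlineExceeded" transition logged above.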
• [SLOW TEST:14.785 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":279,"completed":64,"skipped":1333,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:59:38.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 25 09:59:38.974: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 25 09:59:40.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543179, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543179, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543179, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543178, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 09:59:43.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543179, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543179, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543179, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543178, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 09:59:45.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543179, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543179, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543179, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543178, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 25 09:59:48.018: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 09:59:48.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:59:49.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3575" for this suite. STEP: Destroying namespace "webhook-3575-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.498 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":279,"completed":65,"skipped":1340,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:59:49.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 25 09:59:49.744: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4042 /api/v1/namespaces/watch-4042/configmaps/e2e-watch-test-label-changed 40d2fb49-3ba9-43e0-bf64-2c98340e0929 4216745 0 2020-01-25 09:59:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 25 09:59:49.746: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4042 /api/v1/namespaces/watch-4042/configmaps/e2e-watch-test-label-changed 40d2fb49-3ba9-43e0-bf64-2c98340e0929 4216746 0 2020-01-25 09:59:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 25 09:59:49.746: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4042 /api/v1/namespaces/watch-4042/configmaps/e2e-watch-test-label-changed 40d2fb49-3ba9-43e0-bf64-2c98340e0929 4216748 0 2020-01-25 09:59:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 25 09:59:59.912: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed 
watch-4042 /api/v1/namespaces/watch-4042/configmaps/e2e-watch-test-label-changed 40d2fb49-3ba9-43e0-bf64-2c98340e0929 4216791 0 2020-01-25 09:59:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 25 09:59:59.913: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4042 /api/v1/namespaces/watch-4042/configmaps/e2e-watch-test-label-changed 40d2fb49-3ba9-43e0-bf64-2c98340e0929 4216792 0 2020-01-25 09:59:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 25 09:59:59.913: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4042 /api/v1/namespaces/watch-4042/configmaps/e2e-watch-test-label-changed 40d2fb49-3ba9-43e0-bf64-2c98340e0929 4216793 0 2020-01-25 09:59:49 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 09:59:59.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4042" for this suite. • [SLOW TEST:10.318 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":279,"completed":66,"skipped":1353,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 09:59:59.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 10:00:00.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 25 10:00:00.337: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T10:00:00Z generation:1 name:name1 resourceVersion:4216806 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cbd7d645-551e-4782-9b86-b41a15840e41] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 25 10:00:10.347: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T10:00:10Z generation:1 
name:name2 resourceVersion:4216844 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0894bc52-ec14-4baf-8fdc-73551a26a600] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 25 10:00:20.361: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T10:00:00Z generation:2 name:name1 resourceVersion:4216868 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cbd7d645-551e-4782-9b86-b41a15840e41] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 25 10:00:30.373: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T10:00:10Z generation:2 name:name2 resourceVersion:4216890 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0894bc52-ec14-4baf-8fdc-73551a26a600] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 25 10:00:40.441: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T10:00:00Z generation:2 name:name1 resourceVersion:4216912 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cbd7d645-551e-4782-9b86-b41a15840e41] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 25 10:00:50.470: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T10:00:10Z generation:2 name:name2 resourceVersion:4216936 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:0894bc52-ec14-4baf-8fdc-73551a26a600] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:01:01.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-7194" for this suite. 
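Each "Got : ADDED/MODIFIED/DELETED" line above is one event from a watch opened on the custom resource's list endpoint. The selfLinks show the CRD publishes these resources cluster-scoped under the plural noxus, so the same stream can be observed with a plain kubectl watch (the connection stays open and prints a line per event):

    kubectl get noxus.mygroup.example.com --watch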
• [SLOW TEST:61.094 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":279,"completed":67,"skipped":1357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:01:01.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 25 10:01:01.203: INFO: Waiting up to 5m0s for pod "pod-71bb0cb0-4c16-41e6-a3cb-a8ff8e9c194e" in namespace "emptydir-8236" to be "success or failure" Jan 25 10:01:01.231: INFO: Pod "pod-71bb0cb0-4c16-41e6-a3cb-a8ff8e9c194e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.491439ms Jan 25 10:01:03.240: INFO: Pod "pod-71bb0cb0-4c16-41e6-a3cb-a8ff8e9c194e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036395812s Jan 25 10:01:05.251: INFO: Pod "pod-71bb0cb0-4c16-41e6-a3cb-a8ff8e9c194e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047908898s Jan 25 10:01:07.262: INFO: Pod "pod-71bb0cb0-4c16-41e6-a3cb-a8ff8e9c194e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058430431s Jan 25 10:01:09.274: INFO: Pod "pod-71bb0cb0-4c16-41e6-a3cb-a8ff8e9c194e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070530228s STEP: Saw pod success Jan 25 10:01:09.274: INFO: Pod "pod-71bb0cb0-4c16-41e6-a3cb-a8ff8e9c194e" satisfied condition "success or failure" Jan 25 10:01:09.279: INFO: Trying to get logs from node jerma-node pod pod-71bb0cb0-4c16-41e6-a3cb-a8ff8e9c194e container test-container: STEP: delete the pod Jan 25 10:01:09.544: INFO: Waiting for pod pod-71bb0cb0-4c16-41e6-a3cb-a8ff8e9c194e to disappear Jan 25 10:01:09.565: INFO: Pod pod-71bb0cb0-4c16-41e6-a3cb-a8ff8e9c194e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:01:09.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8236" for this suite. 
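The pod in this test runs as a non-root user, writes into an emptyDir on the node's default medium, and verifies the file carries mode 0777. The e2e image performs the check with its own mount-test tool; a simplified busybox stand-in that exercises the same thing (pod name and UID are illustrative) would be:

    kubectl apply --namespace=emptydir-8236 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0777-check
    spec:
      securityContext:
        runAsUser: 1001
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}
    EOF

This works because emptyDir directories are created world-writable by default, so the non-root UID can create the file before setting its mode.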
• [SLOW TEST:8.559 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":68,"skipped":1392,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:01:09.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-fb590896-8d69-49a1-b694-c19542eb9584 STEP: Creating a pod to test consume configMaps Jan 25 10:01:09.917: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6ab2a49a-7624-4ed6-87dc-0f8fb4189de3" in namespace "projected-546" to be "success or failure" Jan 25 10:01:09.924: INFO: Pod "pod-projected-configmaps-6ab2a49a-7624-4ed6-87dc-0f8fb4189de3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.038583ms Jan 25 10:01:11.933: INFO: Pod "pod-projected-configmaps-6ab2a49a-7624-4ed6-87dc-0f8fb4189de3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016141211s Jan 25 10:01:13.944: INFO: Pod "pod-projected-configmaps-6ab2a49a-7624-4ed6-87dc-0f8fb4189de3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02733207s Jan 25 10:01:15.954: INFO: Pod "pod-projected-configmaps-6ab2a49a-7624-4ed6-87dc-0f8fb4189de3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036694447s Jan 25 10:01:17.962: INFO: Pod "pod-projected-configmaps-6ab2a49a-7624-4ed6-87dc-0f8fb4189de3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045290441s STEP: Saw pod success Jan 25 10:01:17.963: INFO: Pod "pod-projected-configmaps-6ab2a49a-7624-4ed6-87dc-0f8fb4189de3" satisfied condition "success or failure" Jan 25 10:01:17.966: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-6ab2a49a-7624-4ed6-87dc-0f8fb4189de3 container projected-configmap-volume-test: STEP: delete the pod Jan 25 10:01:18.111: INFO: Waiting for pod pod-projected-configmaps-6ab2a49a-7624-4ed6-87dc-0f8fb4189de3 to disappear Jan 25 10:01:18.191: INFO: Pod pod-projected-configmaps-6ab2a49a-7624-4ed6-87dc-0f8fb4189de3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:01:18.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-546" for this suite. 
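This test's pod mounts a projected volume that remaps a ConfigMap key onto a different path inside the mount, then reads it back while running as a non-root user. A minimal sketch of that shape; every name, the UID, and the key/path pair are illustrative:

    kubectl apply --namespace=projected-546 -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: projected-configmap-demo
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-configmap-reader
    spec:
      securityContext:
        runAsUser: 1000
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: projected-configmap-demo
              items:
              - key: data-1
                path: path/to/data-2
    EOF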
• [SLOW TEST:8.625 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":279,"completed":69,"skipped":1406,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:01:18.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Jan 25 10:01:18.399: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:01:41.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7862" for this suite. 
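The rename exercised above is an edit to the CRD's .spec.versions list; once a served (non-storage) version's name changes, the aggregated OpenAPI spec is republished under the new name and the old one disappears, which is what the three "check ..." steps verify. A sketch of the mutation as a JSON patch, with the CRD name, version index, and new version name all as placeholders:

    kubectl patch crd CRD_NAME --type=json \
      -p='[{"op": "replace", "path": "/spec/versions/1/name", "value": "v4"}]'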
• [SLOW TEST:23.034 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":279,"completed":70,"skipped":1428,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:01:41.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jan 25 10:01:41.423: INFO: >>> kubeConfig: /root/.kube/config Jan 25 10:01:44.945: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:02:00.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-890" for this suite. 
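Whether CRDs from two different API groups both land in the aggregated OpenAPI document can be checked against the /openapi/v2 endpoint directly, since the group names appear verbatim in the spec's paths. The pattern below simply extracts every example.com-suffixed group it finds, so both test groups should show up:

    kubectl get --raw /openapi/v2 | grep -o '[a-z0-9.-]*example\.com' | sort -u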
• [SLOW TEST:19.746 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":279,"completed":71,"skipped":1433,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:02:00.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-6095 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 25 10:02:01.058: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 25 10:02:01.189: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 25 10:02:03.372: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 25 10:02:05.194: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 25 10:02:07.594: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 25 10:02:09.235: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 25 10:02:11.200: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 10:02:13.199: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 10:02:15.197: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 10:02:17.198: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 10:02:19.197: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 10:02:21.196: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 25 10:02:23.195: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 25 10:02:23.206: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 25 10:02:31.392: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-6095 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 10:02:31.392: INFO: >>> kubeConfig: /root/.kube/config I0125 10:02:31.452355 9 log.go:172] (0xc0050a0160) (0xc002c3d4a0) Create stream I0125 10:02:31.452486 9 log.go:172] (0xc0050a0160) (0xc002c3d4a0) Stream added, broadcasting: 1 I0125 10:02:31.459712 9 log.go:172] 
(0xc0050a0160) Reply frame received for 1 I0125 10:02:31.459781 9 log.go:172] (0xc0050a0160) (0xc0022b19a0) Create stream I0125 10:02:31.459796 9 log.go:172] (0xc0050a0160) (0xc0022b19a0) Stream added, broadcasting: 3 I0125 10:02:31.461395 9 log.go:172] (0xc0050a0160) Reply frame received for 3 I0125 10:02:31.461424 9 log.go:172] (0xc0050a0160) (0xc002c3d540) Create stream I0125 10:02:31.461445 9 log.go:172] (0xc0050a0160) (0xc002c3d540) Stream added, broadcasting: 5 I0125 10:02:31.463937 9 log.go:172] (0xc0050a0160) Reply frame received for 5 I0125 10:02:31.568919 9 log.go:172] (0xc0050a0160) Data frame received for 3 I0125 10:02:31.569006 9 log.go:172] (0xc0022b19a0) (3) Data frame handling I0125 10:02:31.569033 9 log.go:172] (0xc0022b19a0) (3) Data frame sent I0125 10:02:31.643118 9 log.go:172] (0xc0050a0160) Data frame received for 1 I0125 10:02:31.643374 9 log.go:172] (0xc002c3d4a0) (1) Data frame handling I0125 10:02:31.643475 9 log.go:172] (0xc002c3d4a0) (1) Data frame sent I0125 10:02:31.643858 9 log.go:172] (0xc0050a0160) (0xc002c3d4a0) Stream removed, broadcasting: 1 I0125 10:02:31.644062 9 log.go:172] (0xc0050a0160) (0xc002c3d540) Stream removed, broadcasting: 5 I0125 10:02:31.644170 9 log.go:172] (0xc0050a0160) (0xc0022b19a0) Stream removed, broadcasting: 3 I0125 10:02:31.644353 9 log.go:172] (0xc0050a0160) (0xc002c3d4a0) Stream removed, broadcasting: 1 I0125 10:02:31.644382 9 log.go:172] (0xc0050a0160) (0xc0022b19a0) Stream removed, broadcasting: 3 I0125 10:02:31.644393 9 log.go:172] (0xc0050a0160) (0xc002c3d540) Stream removed, broadcasting: 5 I0125 10:02:31.644518 9 log.go:172] (0xc0050a0160) Go away received Jan 25 10:02:31.645: INFO: Waiting for responses: map[] Jan 25 10:02:31.651: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-6095 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 10:02:31.651: INFO: >>> kubeConfig: /root/.kube/config I0125 10:02:31.699458 9 log.go:172] (0xc004ef4580) (0xc0022b1e00) Create stream I0125 10:02:31.699491 9 log.go:172] (0xc004ef4580) (0xc0022b1e00) Stream added, broadcasting: 1 I0125 10:02:31.711406 9 log.go:172] (0xc004ef4580) Reply frame received for 1 I0125 10:02:31.711452 9 log.go:172] (0xc004ef4580) (0xc002c3d5e0) Create stream I0125 10:02:31.711489 9 log.go:172] (0xc004ef4580) (0xc002c3d5e0) Stream added, broadcasting: 3 I0125 10:02:31.719711 9 log.go:172] (0xc004ef4580) Reply frame received for 3 I0125 10:02:31.719742 9 log.go:172] (0xc004ef4580) (0xc001ff2dc0) Create stream I0125 10:02:31.719754 9 log.go:172] (0xc004ef4580) (0xc001ff2dc0) Stream added, broadcasting: 5 I0125 10:02:31.721514 9 log.go:172] (0xc004ef4580) Reply frame received for 5 I0125 10:02:31.829135 9 log.go:172] (0xc004ef4580) Data frame received for 3 I0125 10:02:31.829245 9 log.go:172] (0xc002c3d5e0) (3) Data frame handling I0125 10:02:31.829291 9 log.go:172] (0xc002c3d5e0) (3) Data frame sent I0125 10:02:31.904995 9 log.go:172] (0xc004ef4580) Data frame received for 1 I0125 10:02:31.905138 9 log.go:172] (0xc004ef4580) (0xc001ff2dc0) Stream removed, broadcasting: 5 I0125 10:02:31.905210 9 log.go:172] (0xc0022b1e00) (1) Data frame handling I0125 10:02:31.905249 9 log.go:172] (0xc0022b1e00) (1) Data frame sent I0125 10:02:31.905279 9 log.go:172] (0xc004ef4580) (0xc002c3d5e0) Stream removed, broadcasting: 3 I0125 10:02:31.905352 9 log.go:172] (0xc004ef4580) 
(0xc0022b1e00) Stream removed, broadcasting: 1 I0125 10:02:31.905374 9 log.go:172] (0xc004ef4580) Go away received I0125 10:02:31.906113 9 log.go:172] (0xc004ef4580) (0xc0022b1e00) Stream removed, broadcasting: 1 I0125 10:02:31.906160 9 log.go:172] (0xc004ef4580) (0xc002c3d5e0) Stream removed, broadcasting: 3 I0125 10:02:31.906177 9 log.go:172] (0xc004ef4580) (0xc001ff2dc0) Stream removed, broadcasting: 5 Jan 25 10:02:31.906: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:02:31.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6095" for this suite. • [SLOW TEST:30.944 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":279,"completed":72,"skipped":1447,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:02:31.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 25 10:02:31.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6181' Jan 25 10:02:32.122: INFO: stderr: "" Jan 25 10:02:32.122: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 25 10:02:42.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6181 -o json' Jan 25 10:02:42.340: INFO: stderr: "" Jan 25 10:02:42.341: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-25T10:02:32Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6181\",\n \"resourceVersion\": \"4217342\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6181/pods/e2e-test-httpd-pod\",\n 
\"uid\": \"015c3e74-92a6-43fd-8f92-1f711112b867\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-lxxw9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-lxxw9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-lxxw9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-25T10:02:32Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-25T10:02:38Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-25T10:02:38Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-25T10:02:32Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://50dba007879c683443621be85790a6b2b2c62d49b9934c8f4b05b56ebf80f3ea\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-25T10:02:37Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.2.250\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.3\",\n \"podIPs\": [\n {\n \"ip\": \"10.44.0.3\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-25T10:02:32Z\"\n }\n}\n" STEP: replace the image in the pod Jan 25 10:02:42.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6181' Jan 25 10:02:42.809: INFO: stderr: "" Jan 25 10:02:42.809: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904 Jan 25 10:02:42.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6181' Jan 25 10:02:50.268: INFO: stderr: "" Jan 25 10:02:50.268: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:02:50.269: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6181" for this suite. • [SLOW TEST:18.343 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":279,"completed":73,"skipped":1462,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:02:50.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 10:02:50.354: INFO: Creating deployment "test-recreate-deployment" Jan 25 10:02:50.378: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 25 10:02:50.427: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 25 10:02:52.497: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 25 10:02:52.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 10:02:54.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 10:02:56.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543370, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 10:02:58.515: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 25 10:02:58.535: INFO: Updating deployment test-recreate-deployment Jan 25 10:02:58.536: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jan 25 10:02:59.002: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6394 /apis/apps/v1/namespaces/deployment-6394/deployments/test-recreate-deployment 3650ce4c-e03b-4b83-9eff-60885eebf7c9 4217467 2 2020-01-25 10:02:50 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004b7cc48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-25 10:02:58 +0000 UTC,LastTransitionTime:2020-01-25 10:02:58 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-25 10:02:58 +0000 UTC,LastTransitionTime:2020-01-25 10:02:50 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 25 10:02:59.007: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-6394 /apis/apps/v1/namespaces/deployment-6394/replicasets/test-recreate-deployment-5f94c574ff 3dd5b259-4bdc-436c-a626-069dbad87ed2 4217466 1 2020-01-25 10:02:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 3650ce4c-e03b-4b83-9eff-60885eebf7c9 0xc00323cc17 0xc00323cc18}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00323cd38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 25 10:02:59.007: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 25 10:02:59.007: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-6394 /apis/apps/v1/namespaces/deployment-6394/replicasets/test-recreate-deployment-799c574856 b2c7057a-0d20-44c1-a992-574eb9ca4621 4217454 2 2020-01-25 10:02:50 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3650ce4c-e03b-4b83-9eff-60885eebf7c9 0xc00323ce67 0xc00323ce68}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00323cf78 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 25 10:02:59.112: INFO: Pod "test-recreate-deployment-5f94c574ff-jcpng" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-jcpng test-recreate-deployment-5f94c574ff- deployment-6394 /api/v1/namespaces/deployment-6394/pods/test-recreate-deployment-5f94c574ff-jcpng 4f0ef615-4d63-4173-9680-85232ae9ff6b 4217462 0 2020-01-25 10:02:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 3dd5b259-4bdc-436c-a626-069dbad87ed2 0xc00323d707 0xc00323d708}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jzjgx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jzjgx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jzjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadCo
nstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:02:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:02:59.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6394" for this suite. • [SLOW TEST:8.859 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":279,"completed":74,"skipped":1466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:02:59.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-3512c7dc-66ac-46fe-bca1-c1281f66344c STEP: Creating a pod to test consume configMaps Jan 25 10:02:59.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1" in namespace "configmap-9589" to be "success or failure" Jan 25 10:02:59.534: INFO: Pod "pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 159.855684ms Jan 25 10:03:01.542: INFO: Pod "pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168489456s Jan 25 10:03:03.550: INFO: Pod "pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176593021s Jan 25 10:03:05.559: INFO: Pod "pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.185289197s Jan 25 10:03:07.566: INFO: Pod "pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191661563s Jan 25 10:03:09.572: INFO: Pod "pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.197935039s Jan 25 10:03:11.601: INFO: Pod "pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.226935907s STEP: Saw pod success Jan 25 10:03:11.601: INFO: Pod "pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1" satisfied condition "success or failure" Jan 25 10:03:11.613: INFO: Trying to get logs from node jerma-node pod pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1 container configmap-volume-test: STEP: delete the pod Jan 25 10:03:11.712: INFO: Waiting for pod pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1 to disappear Jan 25 10:03:11.734: INFO: Pod pod-configmaps-84efcf29-a9bd-4c67-9e20-2c22a5c80dd1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:03:11.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9589" for this suite. • [SLOW TEST:12.608 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":279,"completed":75,"skipped":1490,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:03:11.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 10:03:13.279: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ca17e6e7-8254-4ac5-9476-c8b73f040c78" in namespace "security-context-test-2472" to be "success or failure" Jan 25 10:03:13.368: INFO: Pod "busybox-user-65534-ca17e6e7-8254-4ac5-9476-c8b73f040c78": Phase="Pending", Reason="", readiness=false. Elapsed: 88.28867ms Jan 25 10:03:15.427: INFO: Pod "busybox-user-65534-ca17e6e7-8254-4ac5-9476-c8b73f040c78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14747495s Jan 25 10:03:17.479: INFO: Pod "busybox-user-65534-ca17e6e7-8254-4ac5-9476-c8b73f040c78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199877972s Jan 25 10:03:19.489: INFO: Pod "busybox-user-65534-ca17e6e7-8254-4ac5-9476-c8b73f040c78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209181954s Jan 25 10:03:21.545: INFO: Pod "busybox-user-65534-ca17e6e7-8254-4ac5-9476-c8b73f040c78": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.265563959s Jan 25 10:03:21.545: INFO: Pod "busybox-user-65534-ca17e6e7-8254-4ac5-9476-c8b73f040c78" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:03:21.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2472" for this suite. • [SLOW TEST:9.816 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":76,"skipped":1493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:03:21.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Jan 25 10:03:21.717: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 10:03:34.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6165" for this suite. 
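
The init-container behavior exercised here can be sketched outside the suite; the pod name, images, and commands below are illustrative assumptions, not the manifest the test itself builds:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox:1.29
    command: ["true"]
  - name: init-2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: run-1
    image: busybox:1.29
    command: ["sleep", "5"]
EOF
# each init container must exit successfully, in order, before run-1 starts;
# the pod status moves through Init:0/2, Init:1/2, PodInitializing, Running
kubectl get pod init-demo --watch
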
• [SLOW TEST:13.374 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":279,"completed":77,"skipped":1534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 10:03:34.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 10:03:35.132: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/:
alternatives.log
apt/
... (200; 12.852423ms)
Jan 25 10:03:35.138: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.778552ms)
Jan 25 10:03:35.141: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.449849ms)
Jan 25 10:03:35.147: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.36782ms)
Jan 25 10:03:35.157: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 10.366474ms)
Jan 25 10:03:35.161: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.992779ms)
Jan 25 10:03:35.164: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.891133ms)
Jan 25 10:03:35.168: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.348372ms)
Jan 25 10:03:35.170: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 2.513279ms)
Jan 25 10:03:35.177: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.344079ms)
Jan 25 10:03:35.183: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.307503ms)
Jan 25 10:03:35.192: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 9.321464ms)
Jan 25 10:03:35.197: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.454088ms)
Jan 25 10:03:35.201: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.193709ms)
Jan 25 10:03:35.207: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.208158ms)
Jan 25 10:03:35.249: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 41.853441ms)
Jan 25 10:03:35.255: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.384859ms)
Jan 25 10:03:35.261: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.743746ms)
Jan 25 10:03:35.266: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.112175ms)
Jan 25 10:03:35.271: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.360913ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:03:35.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6969" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":279,"completed":78,"skipped":1562,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:03:35.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:04:10.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-844" for this suite.
STEP: Destroying namespace "nsdeletetest-8055" for this suite.
Jan 25 10:04:10.837: INFO: Namespace nsdeletetest-8055 was already deleted
STEP: Destroying namespace "nsdeletetest-4418" for this suite.

• [SLOW TEST:35.564 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":279,"completed":79,"skipped":1577,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:04:10.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:04:10.987: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 25 10:04:13.087: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:04:14.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1108" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":279,"completed":80,"skipped":1613,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:04:14.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service nodeport-test with type=NodePort in namespace services-3282
STEP: creating replication controller nodeport-test in namespace services-3282
I0125 10:04:15.144219       9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3282, replica count: 2
I0125 10:04:18.196304       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:04:21.197284       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:04:24.198334       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:04:27.198945       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:04:30.199471       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:04:33.200150       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 10:04:33.200: INFO: Creating new exec pod
Jan 25 10:04:42.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3282 execpodzk2kb -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 25 10:04:42.633: INFO: stderr: "I0125 10:04:42.391425    1564 log.go:172] (0xc0009a8210) (0xc00099c000) Create stream\nI0125 10:04:42.391553    1564 log.go:172] (0xc0009a8210) (0xc00099c000) Stream added, broadcasting: 1\nI0125 10:04:42.396491    1564 log.go:172] (0xc0009a8210) Reply frame received for 1\nI0125 10:04:42.396541    1564 log.go:172] (0xc0009a8210) (0xc000629ea0) Create stream\nI0125 10:04:42.396554    1564 log.go:172] (0xc0009a8210) (0xc000629ea0) Stream added, broadcasting: 3\nI0125 10:04:42.397568    1564 log.go:172] (0xc0009a8210) Reply frame received for 3\nI0125 10:04:42.397600    1564 log.go:172] (0xc0009a8210) (0xc000629f40) Create stream\nI0125 10:04:42.397614    1564 log.go:172] (0xc0009a8210) (0xc000629f40) Stream added, broadcasting: 5\nI0125 10:04:42.399539    1564 log.go:172] (0xc0009a8210) Reply frame received for 5\nI0125 10:04:42.478939    1564 log.go:172] (0xc0009a8210) Data frame received for 5\nI0125 10:04:42.479055    1564 log.go:172] (0xc000629f40) (5) Data frame handling\nI0125 10:04:42.479123    1564 log.go:172] (0xc000629f40) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0125 10:04:42.489812    1564 log.go:172] (0xc0009a8210) Data frame received for 5\nI0125 10:04:42.489850    1564 log.go:172] (0xc000629f40) (5) Data frame handling\nI0125 10:04:42.489871    1564 log.go:172] (0xc000629f40) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0125 10:04:42.621320    1564 log.go:172] (0xc0009a8210) Data frame received for 1\nI0125 10:04:42.621383    1564 log.go:172] (0xc00099c000) (1) Data frame handling\nI0125 10:04:42.621412    1564 log.go:172] (0xc00099c000) (1) Data frame sent\nI0125 10:04:42.621427    1564 log.go:172] (0xc0009a8210) (0xc00099c000) Stream removed, broadcasting: 1\nI0125 10:04:42.621853    1564 log.go:172] (0xc0009a8210) (0xc000629ea0) Stream removed, broadcasting: 3\nI0125 10:04:42.622027    1564 log.go:172] (0xc0009a8210) (0xc000629f40) Stream removed, broadcasting: 5\nI0125 10:04:42.622056    1564 log.go:172] (0xc0009a8210) (0xc00099c000) Stream removed, broadcasting: 1\nI0125 10:04:42.622064    1564 log.go:172] (0xc0009a8210) (0xc000629ea0) Stream removed, broadcasting: 3\nI0125 10:04:42.622070    1564 log.go:172] (0xc0009a8210) (0xc000629f40) Stream removed, broadcasting: 5\n"
Jan 25 10:04:42.633: INFO: stdout: ""
Jan 25 10:04:42.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3282 execpodzk2kb -- /bin/sh -x -c nc -zv -t -w 2 10.96.13.27 80'
Jan 25 10:04:42.955: INFO: stderr: "I0125 10:04:42.767982    1580 log.go:172] (0xc00099a580) (0xc0005fbea0) Create stream\nI0125 10:04:42.768353    1580 log.go:172] (0xc00099a580) (0xc0005fbea0) Stream added, broadcasting: 1\nI0125 10:04:42.775542    1580 log.go:172] (0xc00099a580) Reply frame received for 1\nI0125 10:04:42.775590    1580 log.go:172] (0xc00099a580) (0xc0005fbf40) Create stream\nI0125 10:04:42.775607    1580 log.go:172] (0xc00099a580) (0xc0005fbf40) Stream added, broadcasting: 3\nI0125 10:04:42.777056    1580 log.go:172] (0xc00099a580) Reply frame received for 3\nI0125 10:04:42.777099    1580 log.go:172] (0xc00099a580) (0xc0007534a0) Create stream\nI0125 10:04:42.777120    1580 log.go:172] (0xc00099a580) (0xc0007534a0) Stream added, broadcasting: 5\nI0125 10:04:42.780052    1580 log.go:172] (0xc00099a580) Reply frame received for 5\nI0125 10:04:42.855923    1580 log.go:172] (0xc00099a580) Data frame received for 5\nI0125 10:04:42.856000    1580 log.go:172] (0xc0007534a0) (5) Data frame handling\nI0125 10:04:42.856059    1580 log.go:172] (0xc0007534a0) (5) Data frame sent\n+ I0125 10:04:42.856117    1580 log.go:172] (0xc00099a580) Data frame received for 5\nI0125 10:04:42.856151    1580 log.go:172] (0xc0007534a0) (5) Data frame handling\nI0125 10:04:42.856180    1580 log.go:172] (0xc0007534a0) (5) Data frame sent\nI0125 10:04:42.856198    1580 log.go:172] (0xc00099a580) Data frame received for 5\nnc -zv -t -w 2 10.96.13.27I0125 10:04:42.856209    1580 log.go:172] (0xc0007534a0) (5) Data frame handling\nI0125 10:04:42.856297    1580 log.go:172] (0xc0007534a0) (5) Data frame sent\n 80\nI0125 10:04:42.866159    1580 log.go:172] (0xc00099a580) Data frame received for 5\nI0125 10:04:42.866244    1580 log.go:172] (0xc0007534a0) (5) Data frame handling\nI0125 10:04:42.866273    1580 log.go:172] (0xc0007534a0) (5) Data frame sent\nConnection to 10.96.13.27 80 port [tcp/http] succeeded!\nI0125 10:04:42.945364    1580 log.go:172] (0xc00099a580) Data frame received for 1\nI0125 10:04:42.945503    1580 log.go:172] (0xc0005fbea0) (1) Data frame handling\nI0125 10:04:42.945537    1580 log.go:172] (0xc0005fbea0) (1) Data frame sent\nI0125 10:04:42.948123    1580 log.go:172] (0xc00099a580) (0xc0007534a0) Stream removed, broadcasting: 5\nI0125 10:04:42.948243    1580 log.go:172] (0xc00099a580) (0xc0005fbea0) Stream removed, broadcasting: 1\nI0125 10:04:42.949012    1580 log.go:172] (0xc00099a580) (0xc0005fbf40) Stream removed, broadcasting: 3\nI0125 10:04:42.949296    1580 log.go:172] (0xc00099a580) Go away received\nI0125 10:04:42.949385    1580 log.go:172] (0xc00099a580) (0xc0005fbea0) Stream removed, broadcasting: 1\nI0125 10:04:42.949425    1580 log.go:172] (0xc00099a580) (0xc0005fbf40) Stream removed, broadcasting: 3\nI0125 10:04:42.949441    1580 log.go:172] (0xc00099a580) (0xc0007534a0) Stream removed, broadcasting: 5\n"
Jan 25 10:04:42.956: INFO: stdout: ""
Jan 25 10:04:42.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3282 execpodzk2kb -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 31241'
Jan 25 10:04:43.286: INFO: stderr: "I0125 10:04:43.115557    1601 log.go:172] (0xc000b4cfd0) (0xc000795040) Create stream\nI0125 10:04:43.115638    1601 log.go:172] (0xc000b4cfd0) (0xc000795040) Stream added, broadcasting: 1\nI0125 10:04:43.131100    1601 log.go:172] (0xc000b4cfd0) Reply frame received for 1\nI0125 10:04:43.131184    1601 log.go:172] (0xc000b4cfd0) (0xc0007950e0) Create stream\nI0125 10:04:43.131195    1601 log.go:172] (0xc000b4cfd0) (0xc0007950e0) Stream added, broadcasting: 3\nI0125 10:04:43.132590    1601 log.go:172] (0xc000b4cfd0) Reply frame received for 3\nI0125 10:04:43.132616    1601 log.go:172] (0xc000b4cfd0) (0xc000956b40) Create stream\nI0125 10:04:43.132670    1601 log.go:172] (0xc000b4cfd0) (0xc000956b40) Stream added, broadcasting: 5\nI0125 10:04:43.133905    1601 log.go:172] (0xc000b4cfd0) Reply frame received for 5\nI0125 10:04:43.196826    1601 log.go:172] (0xc000b4cfd0) Data frame received for 5\nI0125 10:04:43.196879    1601 log.go:172] (0xc000956b40) (5) Data frame handling\nI0125 10:04:43.196902    1601 log.go:172] (0xc000956b40) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 31241\nI0125 10:04:43.198608    1601 log.go:172] (0xc000b4cfd0) Data frame received for 5\nI0125 10:04:43.198634    1601 log.go:172] (0xc000956b40) (5) Data frame handling\nI0125 10:04:43.198656    1601 log.go:172] (0xc000956b40) (5) Data frame sent\nConnection to 10.96.2.250 31241 port [tcp/31241] succeeded!\nI0125 10:04:43.276155    1601 log.go:172] (0xc000b4cfd0) Data frame received for 1\nI0125 10:04:43.276283    1601 log.go:172] (0xc000b4cfd0) (0xc0007950e0) Stream removed, broadcasting: 3\nI0125 10:04:43.276319    1601 log.go:172] (0xc000795040) (1) Data frame handling\nI0125 10:04:43.276332    1601 log.go:172] (0xc000795040) (1) Data frame sent\nI0125 10:04:43.276341    1601 log.go:172] (0xc000b4cfd0) (0xc000956b40) Stream removed, broadcasting: 5\nI0125 10:04:43.276364    1601 log.go:172] (0xc000b4cfd0) (0xc000795040) Stream removed, broadcasting: 1\nI0125 10:04:43.276377    1601 log.go:172] (0xc000b4cfd0) Go away received\nI0125 10:04:43.276658    1601 log.go:172] (0xc000b4cfd0) (0xc000795040) Stream removed, broadcasting: 1\nI0125 10:04:43.276701    1601 log.go:172] (0xc000b4cfd0) (0xc0007950e0) Stream removed, broadcasting: 3\nI0125 10:04:43.276711    1601 log.go:172] (0xc000b4cfd0) (0xc000956b40) Stream removed, broadcasting: 5\n"
Jan 25 10:04:43.287: INFO: stdout: ""
Jan 25 10:04:43.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3282 execpodzk2kb -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 31241'
Jan 25 10:04:43.572: INFO: stderr: "I0125 10:04:43.421913    1621 log.go:172] (0xc000a8ee70) (0xc000a6c1e0) Create stream\nI0125 10:04:43.422000    1621 log.go:172] (0xc000a8ee70) (0xc000a6c1e0) Stream added, broadcasting: 1\nI0125 10:04:43.424872    1621 log.go:172] (0xc000a8ee70) Reply frame received for 1\nI0125 10:04:43.424900    1621 log.go:172] (0xc000a8ee70) (0xc000a6c280) Create stream\nI0125 10:04:43.424906    1621 log.go:172] (0xc000a8ee70) (0xc000a6c280) Stream added, broadcasting: 3\nI0125 10:04:43.426007    1621 log.go:172] (0xc000a8ee70) Reply frame received for 3\nI0125 10:04:43.426032    1621 log.go:172] (0xc000a8ee70) (0xc000a6e1e0) Create stream\nI0125 10:04:43.426040    1621 log.go:172] (0xc000a8ee70) (0xc000a6e1e0) Stream added, broadcasting: 5\nI0125 10:04:43.427076    1621 log.go:172] (0xc000a8ee70) Reply frame received for 5\nI0125 10:04:43.488203    1621 log.go:172] (0xc000a8ee70) Data frame received for 5\nI0125 10:04:43.488316    1621 log.go:172] (0xc000a6e1e0) (5) Data frame handling\nI0125 10:04:43.488351    1621 log.go:172] (0xc000a6e1e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 31241\nI0125 10:04:43.495758    1621 log.go:172] (0xc000a8ee70) Data frame received for 5\nI0125 10:04:43.495785    1621 log.go:172] (0xc000a6e1e0) (5) Data frame handling\nI0125 10:04:43.495795    1621 log.go:172] (0xc000a6e1e0) (5) Data frame sent\nConnection to 10.96.1.234 31241 port [tcp/31241] succeeded!\nI0125 10:04:43.563138    1621 log.go:172] (0xc000a8ee70) Data frame received for 1\nI0125 10:04:43.563236    1621 log.go:172] (0xc000a8ee70) (0xc000a6e1e0) Stream removed, broadcasting: 5\nI0125 10:04:43.563317    1621 log.go:172] (0xc000a6c1e0) (1) Data frame handling\nI0125 10:04:43.563357    1621 log.go:172] (0xc000a6c1e0) (1) Data frame sent\nI0125 10:04:43.563375    1621 log.go:172] (0xc000a8ee70) (0xc000a6c280) Stream removed, broadcasting: 3\nI0125 10:04:43.563457    1621 log.go:172] (0xc000a8ee70) (0xc000a6c1e0) Stream removed, broadcasting: 1\nI0125 10:04:43.563481    1621 log.go:172] (0xc000a8ee70) Go away received\nI0125 10:04:43.564167    1621 log.go:172] (0xc000a8ee70) (0xc000a6c1e0) Stream removed, broadcasting: 1\nI0125 10:04:43.564187    1621 log.go:172] (0xc000a8ee70) (0xc000a6c280) Stream removed, broadcasting: 3\nI0125 10:04:43.564193    1621 log.go:172] (0xc000a8ee70) (0xc000a6e1e0) Stream removed, broadcasting: 5\n"
Jan 25 10:04:43.572: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:04:43.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3282" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:29.051 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":279,"completed":81,"skipped":1623,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:04:43.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 10:04:44.578: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 10:04:46.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:04:48.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:04:50.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:04:53.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:04:54.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715543484, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 10:04:57.715: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:04:57.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5690" for this suite.
STEP: Destroying namespace "webhook-5690-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:14.474 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":279,"completed":82,"skipped":1641,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
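
For reference, the registration step above creates a MutatingWebhookConfiguration that rewrites incoming ConfigMaps before admission. A minimal sketch of that object follows; the configuration name, handler path, and CA bundle are illustrative assumptions, while the service name and namespace match this run.

  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: mutate-configmap-example            # illustrative name
  webhooks:
  - name: mutate-configmap.example.com        # illustrative name
    clientConfig:
      service:
        name: e2e-test-webhook                # service name from this run
        namespace: webhook-5690               # namespace from this run
        path: /mutating-configmaps            # assumed handler path
      caBundle: "<base64-encoded CA>"         # placeholder
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
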
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:04:58.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:05:08.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8518" for this suite.

• [SLOW TEST:10.161 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":279,"completed":83,"skipped":1728,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
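
The check above relies on the container runtime falling back to the image's ENTRYPOINT and CMD when the pod spec sets neither command nor args. A minimal sketch of such a pod, with an assumed image and illustrative names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: image-defaults-example              # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
      # no command: and no args: here, so the image's ENTRYPOINT/CMD run unchanged
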
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:05:08.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's command
Jan 25 10:05:08.377: INFO: Waiting up to 5m0s for pod "var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb" in namespace "var-expansion-1985" to be "success or failure"
Jan 25 10:05:08.403: INFO: Pod "var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb": Phase="Pending", Reason="", readiness=false. Elapsed: 26.090914ms
Jan 25 10:05:10.410: INFO: Pod "var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033768565s
Jan 25 10:05:12.419: INFO: Pod "var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042085233s
Jan 25 10:05:14.424: INFO: Pod "var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047417561s
Jan 25 10:05:16.430: INFO: Pod "var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053758509s
Jan 25 10:05:18.440: INFO: Pod "var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063632961s
STEP: Saw pod success
Jan 25 10:05:18.441: INFO: Pod "var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb" satisfied condition "success or failure"
Jan 25 10:05:18.446: INFO: Trying to get logs from node jerma-node pod var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb container dapi-container: 
STEP: delete the pod
Jan 25 10:05:18.589: INFO: Waiting for pod var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb to disappear
Jan 25 10:05:18.607: INFO: Pod var-expansion-f9c462b0-1507-43c7-b396-1e9d217b55fb no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:05:18.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1985" for this suite.

• [SLOW TEST:10.424 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":279,"completed":84,"skipped":1768,"failed":0}
SSSSSSS
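
Substitution works by writing $(VAR) inside command or args; the kubelet expands references to variables declared in env before the container starts, with no shell involved. A minimal sketch, assuming a busybox image and an illustrative variable (only the container name is taken from this run):

  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-example               # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container                    # container name as in this run
      image: busybox                          # assumed image
      env:
      - name: MESSAGE                         # illustrative variable
        value: "hello from substitution"
      command: ["echo", "$(MESSAGE)"]         # kubelet expands this to: echo hello from substitution
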
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:05:18.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:05:18.992: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"64b20c9e-d877-4090-a227-1daa6e190bdb", Controller:(*bool)(0xc0055284da), BlockOwnerDeletion:(*bool)(0xc0055284db)}}
Jan 25 10:05:19.004: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c6ea565b-2aca-4582-b58d-4e5ff19c879c", Controller:(*bool)(0xc004b7d21a), BlockOwnerDeletion:(*bool)(0xc004b7d21b)}}
Jan 25 10:05:19.027: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b8acc735-04a6-4768-9955-8df01cdf0837", Controller:(*bool)(0xc00552868a), BlockOwnerDeletion:(*bool)(0xc00552868b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:05:24.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7725" for this suite.

• [SLOW TEST:5.543 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":279,"completed":85,"skipped":1775,"failed":0}
SSSSSSSS
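
The cycle above is pod1 owned by pod3, pod2 owned by pod1, and pod3 owned by pod2, expressed through metadata.ownerReferences; the test passes because the garbage collector tolerates such cycles instead of deadlocking. One link of the cycle looks roughly like this (the UID must be the live owner's actual UID; the value shown is a placeholder, and the image is assumed):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1
    ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: pod3
      uid: 00000000-0000-0000-0000-000000000000   # placeholder; must be pod3's actual UID
      controller: true
      blockOwnerDeletion: true
  spec:
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1                 # assumed image
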
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:05:24.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0125 10:05:25.345354       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 10:05:25.345: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:05:25.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5480" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":279,"completed":86,"skipped":1783,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
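
The ReplicaSet and pods disappear here because the delete request does not orphan dependents, so the garbage collector removes them once the deployment is gone. On a kubectl of this vintage the cascade flag is a boolean (newer releases take orphan/background/foreground instead); a sketch with an illustrative deployment name:

  # default: dependents (ReplicaSet, Pods) are garbage collected with the Deployment
  kubectl delete deployment <name>

  # opt out: delete only the Deployment and orphan its dependents
  kubectl delete deployment <name> --cascade=false
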
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:05:25.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-f8cc77c5-07d4-4abf-93cc-3001f44d09e2
STEP: Creating a pod to test consume secrets
Jan 25 10:05:25.549: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6" in namespace "projected-2518" to be "success or failure"
Jan 25 10:05:25.571: INFO: Pod "pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.418029ms
Jan 25 10:05:27.856: INFO: Pod "pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.306805381s
Jan 25 10:05:29.883: INFO: Pod "pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333658555s
Jan 25 10:05:31.891: INFO: Pod "pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.341865951s
Jan 25 10:05:33.901: INFO: Pod "pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.351734885s
Jan 25 10:05:35.915: INFO: Pod "pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.365921118s
Jan 25 10:05:37.924: INFO: Pod "pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.374903611s
STEP: Saw pod success
Jan 25 10:05:37.924: INFO: Pod "pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6" satisfied condition "success or failure"
Jan 25 10:05:37.928: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 10:05:38.137: INFO: Waiting for pod pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6 to disappear
Jan 25 10:05:38.159: INFO: Pod pod-projected-secrets-984aae21-7a91-4758-af84-2e548cac1fb6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:05:38.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2518" for this suite.

• [SLOW TEST:12.845 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":279,"completed":87,"skipped":1836,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
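
"With mappings" means individual secret keys are projected to custom file names via items/key/path instead of the default one-file-per-key layout. A minimal sketch, assuming a secret named mysecret with a key data-1 (only the container name is taken from this run):

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-mapping-example    # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test      # container name as in this run
      image: busybox                          # assumed image
      command: ["cat", "/etc/projected/new-path-data-1"]
      volumeMounts:
      - name: projected-secret
        mountPath: /etc/projected
        readOnly: true
    volumes:
    - name: projected-secret
      projected:
        sources:
        - secret:
            name: mysecret                    # assumed secret name
            items:
            - key: data-1                     # assumed key
              path: new-path-data-1           # remapped file name inside the mount
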
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:05:38.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-projected-zhn4
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 10:05:38.581: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-zhn4" in namespace "subpath-6119" to be "success or failure"
Jan 25 10:05:38.587: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.275013ms
Jan 25 10:05:40.601: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019851783s
Jan 25 10:05:42.615: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033951166s
Jan 25 10:05:44.624: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042993693s
Jan 25 10:05:46.637: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Running", Reason="", readiness=true. Elapsed: 8.055981821s
Jan 25 10:05:48.645: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Running", Reason="", readiness=true. Elapsed: 10.063410074s
Jan 25 10:05:50.660: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Running", Reason="", readiness=true. Elapsed: 12.078454316s
Jan 25 10:05:52.667: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Running", Reason="", readiness=true. Elapsed: 14.086199727s
Jan 25 10:05:54.675: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Running", Reason="", readiness=true. Elapsed: 16.093842288s
Jan 25 10:05:56.684: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Running", Reason="", readiness=true. Elapsed: 18.102916008s
Jan 25 10:05:58.726: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Running", Reason="", readiness=true. Elapsed: 20.145107228s
Jan 25 10:06:00.733: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Running", Reason="", readiness=true. Elapsed: 22.152121845s
Jan 25 10:06:02.750: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Running", Reason="", readiness=true. Elapsed: 24.169114239s
Jan 25 10:06:04.760: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Running", Reason="", readiness=true. Elapsed: 26.178645853s
Jan 25 10:06:06.769: INFO: Pod "pod-subpath-test-projected-zhn4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.18767607s
STEP: Saw pod success
Jan 25 10:06:06.769: INFO: Pod "pod-subpath-test-projected-zhn4" satisfied condition "success or failure"
Jan 25 10:06:06.779: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-zhn4 container test-container-subpath-projected-zhn4: 
STEP: delete the pod
Jan 25 10:06:06.819: INFO: Waiting for pod pod-subpath-test-projected-zhn4 to disappear
Jan 25 10:06:06.826: INFO: Pod pod-subpath-test-projected-zhn4 no longer exists
STEP: Deleting pod pod-subpath-test-projected-zhn4
Jan 25 10:06:06.826: INFO: Deleting pod "pod-subpath-test-projected-zhn4" in namespace "subpath-6119"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:06:06.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6119" for this suite.

• [SLOW TEST:28.738 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":279,"completed":88,"skipped":1861,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
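
subPath mounts a single entry from inside a volume rather than the volume root, and "atomic writer" volumes (configmap, secret, downwardAPI, projected) publish their contents through atomic symlink swaps, which is the interaction this test exercises. A minimal sketch, assuming a configmap my-config holding a key file.txt:

  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-projected-example           # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox                          # assumed image
      command: ["sh", "-c", "cat /mnt/file && sleep 30"]
      volumeMounts:
      - name: projected-vol
        mountPath: /mnt/file                  # a single file, not a directory
        subPath: file.txt                     # pick one entry out of the volume
    volumes:
    - name: projected-vol
      projected:
        sources:
        - configMap:
            name: my-config                   # assumed configmap with key file.txt
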
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:06:06.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the initial replication controller
Jan 25 10:06:07.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3976'
Jan 25 10:06:07.654: INFO: stderr: ""
Jan 25 10:06:07.655: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 10:06:07.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3976'
Jan 25 10:06:07.840: INFO: stderr: ""
Jan 25 10:06:07.841: INFO: stdout: "update-demo-nautilus-j74mf update-demo-nautilus-vpjrv "
Jan 25 10:06:07.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j74mf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3976'
Jan 25 10:06:08.030: INFO: stderr: ""
Jan 25 10:06:08.030: INFO: stdout: ""
Jan 25 10:06:08.030: INFO: update-demo-nautilus-j74mf is created but not running
Jan 25 10:06:13.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3976'
Jan 25 10:06:13.889: INFO: stderr: ""
Jan 25 10:06:13.889: INFO: stdout: "update-demo-nautilus-j74mf update-demo-nautilus-vpjrv "
Jan 25 10:06:13.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j74mf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3976'
Jan 25 10:06:14.553: INFO: stderr: ""
Jan 25 10:06:14.553: INFO: stdout: ""
Jan 25 10:06:14.553: INFO: update-demo-nautilus-j74mf is created but not running
Jan 25 10:06:19.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3976'
Jan 25 10:06:19.752: INFO: stderr: ""
Jan 25 10:06:19.752: INFO: stdout: "update-demo-nautilus-j74mf update-demo-nautilus-vpjrv "
Jan 25 10:06:19.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j74mf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3976'
Jan 25 10:06:19.913: INFO: stderr: ""
Jan 25 10:06:19.913: INFO: stdout: "true"
Jan 25 10:06:19.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j74mf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3976'
Jan 25 10:06:20.017: INFO: stderr: ""
Jan 25 10:06:20.017: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 10:06:20.017: INFO: validating pod update-demo-nautilus-j74mf
Jan 25 10:06:20.034: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 10:06:20.034: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 10:06:20.034: INFO: update-demo-nautilus-j74mf is verified up and running
Jan 25 10:06:20.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vpjrv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3976'
Jan 25 10:06:20.134: INFO: stderr: ""
Jan 25 10:06:20.135: INFO: stdout: "true"
Jan 25 10:06:20.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vpjrv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3976'
Jan 25 10:06:20.248: INFO: stderr: ""
Jan 25 10:06:20.248: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 10:06:20.248: INFO: validating pod update-demo-nautilus-vpjrv
Jan 25 10:06:20.256: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 10:06:20.256: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 10:06:20.256: INFO: update-demo-nautilus-vpjrv is verified up and running
STEP: rolling-update to new replication controller
Jan 25 10:06:20.260: INFO: scanned /root for discovery docs: 
Jan 25 10:06:20.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3976'
Jan 25 10:06:47.919: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 25 10:06:47.920: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 10:06:47.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3976'
Jan 25 10:06:48.116: INFO: stderr: ""
Jan 25 10:06:48.116: INFO: stdout: "update-demo-kitten-6dqcl update-demo-kitten-ncf7n update-demo-nautilus-vpjrv "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan 25 10:06:53.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3976'
Jan 25 10:06:53.316: INFO: stderr: ""
Jan 25 10:06:53.316: INFO: stdout: "update-demo-kitten-6dqcl update-demo-kitten-ncf7n "
Jan 25 10:06:53.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6dqcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3976'
Jan 25 10:06:53.504: INFO: stderr: ""
Jan 25 10:06:53.504: INFO: stdout: "true"
Jan 25 10:06:53.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6dqcl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3976'
Jan 25 10:06:53.701: INFO: stderr: ""
Jan 25 10:06:53.701: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 25 10:06:53.701: INFO: validating pod update-demo-kitten-6dqcl
Jan 25 10:06:53.713: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 25 10:06:53.713: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 25 10:06:53.713: INFO: update-demo-kitten-6dqcl is verified up and running
Jan 25 10:06:53.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ncf7n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3976'
Jan 25 10:06:53.904: INFO: stderr: ""
Jan 25 10:06:53.905: INFO: stdout: "true"
Jan 25 10:06:53.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ncf7n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3976'
Jan 25 10:06:54.019: INFO: stderr: ""
Jan 25 10:06:54.020: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 25 10:06:54.020: INFO: validating pod update-demo-kitten-ncf7n
Jan 25 10:06:54.042: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 25 10:06:54.043: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 25 10:06:54.043: INFO: update-demo-kitten-ncf7n is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:06:54.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3976" for this suite.

• [SLOW TEST:47.103 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":279,"completed":89,"skipped":1892,"failed":0}
SSSSSSSSSSSSSSSSSSS
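
rolling-update (deprecated, as the stderr above notes) replaces one replication controller with a new one by scaling them in opposite directions, then deletes the old controller and renames the new one to the old name, which is visible in the stdout above. The invocation shape, with the replacement manifest on stdin as in this run, plus the modern Deployment-based equivalent (names in the second block are illustrative):

  # deprecated RC-based rolling update, as exercised above
  kubectl rolling-update update-demo-nautilus --update-period=1s -f - < replacement-rc.yaml

  # modern equivalent with a Deployment
  kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
  kubectl rollout status deployment/update-demo
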
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:06:54.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 10:06:54.179: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06" in namespace "downward-api-252" to be "success or failure"
Jan 25 10:06:54.192: INFO: Pod "downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06": Phase="Pending", Reason="", readiness=false. Elapsed: 12.818165ms
Jan 25 10:06:56.215: INFO: Pod "downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03497712s
Jan 25 10:06:58.232: INFO: Pod "downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052618344s
Jan 25 10:07:01.713: INFO: Pod "downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06": Phase="Pending", Reason="", readiness=false. Elapsed: 7.53302778s
Jan 25 10:07:03.723: INFO: Pod "downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06": Phase="Pending", Reason="", readiness=false. Elapsed: 9.543479869s
Jan 25 10:07:05.731: INFO: Pod "downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.551271235s
STEP: Saw pod success
Jan 25 10:07:05.731: INFO: Pod "downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06" satisfied condition "success or failure"
Jan 25 10:07:05.736: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06 container client-container: 
STEP: delete the pod
Jan 25 10:07:05.786: INFO: Waiting for pod downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06 to disappear
Jan 25 10:07:05.815: INFO: Pod downwardapi-volume-ec37ffd7-6bd2-464b-a428-34ae999def06 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:07:05.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-252" for this suite.

• [SLOW TEST:11.806 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":279,"completed":90,"skipped":1911,"failed":0}
SSSSSSSSSSSSSS
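
The downward API publishes container resource values through resourceFieldRef, and when a container declares no CPU limit the published value defaults to the node's allocatable CPU, which is exactly the assertion above. A minimal sketch (the container name matches this run; the image, command, and paths are assumed):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-cpu-example            # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container                  # container name as in this run
      image: busybox                          # assumed image
      command: ["cat", "/etc/podinfo/cpu_limit"]
      # no resources.limits.cpu is set, so cpu_limit reports the node's allocatable CPU
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
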
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:07:05.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-8ae3fc96-1bf7-4f59-a08e-e7240140a89a
STEP: Creating secret with name s-test-opt-upd-e3f82e2b-2524-42c3-bca4-17840eaaf0f9
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-8ae3fc96-1bf7-4f59-a08e-e7240140a89a
STEP: Updating secret s-test-opt-upd-e3f82e2b-2524-42c3-bca4-17840eaaf0f9
STEP: Creating secret with name s-test-opt-create-ef979ee9-4ec2-4106-96e2-1d4f3ed996d5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:08:25.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2265" for this suite.

• [SLOW TEST:79.450 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":279,"completed":91,"skipped":1925,"failed":0}
SSSS
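
Marking a projected source optional lets the pod start, and keep running, while the referenced secret is absent; the test deletes one secret, updates another, creates a third, and waits for the volume to converge. A sketch of the volume shape (secret names shortened from this run; image and command assumed):

  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-secrets-example            # illustrative name
  spec:
    containers:
    - name: watcher
      image: busybox                          # assumed image
      command: ["sh", "-c", "while true; do ls /etc/secrets; sleep 5; done"]
      volumeMounts:
      - name: secrets
        mountPath: /etc/secrets
    volumes:
    - name: secrets
      projected:
        sources:
        - secret:
            name: s-test-opt-del              # deleted mid-test; optional tolerates the absence
            optional: true
        - secret:
            name: s-test-opt-create           # created after the pod starts; appears in the volume later
            optional: true
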
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:08:25.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 25 10:08:25.454: INFO: Waiting up to 5m0s for pod "pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079" in namespace "emptydir-3587" to be "success or failure"
Jan 25 10:08:25.461: INFO: Pod "pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079": Phase="Pending", Reason="", readiness=false. Elapsed: 7.636533ms
Jan 25 10:08:27.481: INFO: Pod "pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027433472s
Jan 25 10:08:29.508: INFO: Pod "pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053868983s
Jan 25 10:08:31.519: INFO: Pod "pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064952998s
Jan 25 10:08:33.527: INFO: Pod "pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072742932s
Jan 25 10:08:35.533: INFO: Pod "pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079040551s
STEP: Saw pod success
Jan 25 10:08:35.533: INFO: Pod "pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079" satisfied condition "success or failure"
Jan 25 10:08:35.538: INFO: Trying to get logs from node jerma-node pod pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079 container test-container: 
STEP: delete the pod
Jan 25 10:08:35.631: INFO: Waiting for pod pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079 to disappear
Jan 25 10:08:35.642: INFO: Pod pod-4ef051c0-d0ca-4cc9-8358-bd8e31f1c079 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:08:35.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3587" for this suite.

• [SLOW TEST:10.405 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":92,"skipped":1929,"failed":0}
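
The tuple (root,0644,default) names the case: write as root, file mode 0644, emptyDir backed by the default medium (node disk rather than tmpfs). A minimal sketch with an assumed image and command (only the container name is taken from this run):

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-example               # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container                    # container name as in this run
      image: busybox                          # assumed image
      command: ["sh", "-c", "echo data > /mnt/volume/file && chmod 0644 /mnt/volume/file && ls -l /mnt/volume"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/volume
    volumes:
    - name: scratch
      emptyDir: {}                            # default medium; medium: Memory would request tmpfs instead
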
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:08:35.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:08:35.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4558" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":279,"completed":93,"skipped":1929,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:08:36.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 25 10:08:37.610: INFO: Pod name wrapped-volume-race-c25ec758-3a02-46d0-84f5-7488566982be: Found 0 pods out of 5
Jan 25 10:08:42.621: INFO: Pod name wrapped-volume-race-c25ec758-3a02-46d0-84f5-7488566982be: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c25ec758-3a02-46d0-84f5-7488566982be in namespace emptydir-wrapper-9158, will wait for the garbage collector to delete the pods
Jan 25 10:09:08.819: INFO: Deleting ReplicationController wrapped-volume-race-c25ec758-3a02-46d0-84f5-7488566982be took: 18.685673ms
Jan 25 10:09:09.220: INFO: Terminating ReplicationController wrapped-volume-race-c25ec758-3a02-46d0-84f5-7488566982be pods took: 400.724948ms
STEP: Creating RC which spawns configmap-volume pods
Jan 25 10:09:20.384: INFO: Pod name wrapped-volume-race-2dd61289-45fa-494a-b76c-a363d20143e1: Found 0 pods out of 5
Jan 25 10:09:25.435: INFO: Pod name wrapped-volume-race-2dd61289-45fa-494a-b76c-a363d20143e1: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2dd61289-45fa-494a-b76c-a363d20143e1 in namespace emptydir-wrapper-9158, will wait for the garbage collector to delete the pods
Jan 25 10:09:59.556: INFO: Deleting ReplicationController wrapped-volume-race-2dd61289-45fa-494a-b76c-a363d20143e1 took: 8.637386ms
Jan 25 10:09:59.957: INFO: Terminating ReplicationController wrapped-volume-race-2dd61289-45fa-494a-b76c-a363d20143e1 pods took: 400.792766ms
STEP: Creating RC which spawns configmap-volume pods
Jan 25 10:10:12.334: INFO: Pod name wrapped-volume-race-b0326e15-a668-4d41-8e14-d0467f18c404: Found 0 pods out of 5
Jan 25 10:10:18.342: INFO: Pod name wrapped-volume-race-b0326e15-a668-4d41-8e14-d0467f18c404: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b0326e15-a668-4d41-8e14-d0467f18c404 in namespace emptydir-wrapper-9158, will wait for the garbage collector to delete the pods
Jan 25 10:10:43.551: INFO: Deleting ReplicationController wrapped-volume-race-b0326e15-a668-4d41-8e14-d0467f18c404 took: 24.484777ms
Jan 25 10:10:44.052: INFO: Terminating ReplicationController wrapped-volume-race-b0326e15-a668-4d41-8e14-d0467f18c404 pods took: 501.604278ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:10:56.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9158" for this suite.

• [SLOW TEST:140.695 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":279,"completed":94,"skipped":1950,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
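
The race being exercised: a replication controller spawns several pods at once, each mounting many configmap volumes, and the volume machinery must materialize them concurrently without collisions. A trimmed sketch of such a pod template, showing two of the fifty configmaps (all names are assumed):

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: wrapped-volume-race-example         # illustrative name
  spec:
    replicas: 5
    selector:
      app: wrapped-volume-race
    template:
      metadata:
        labels:
          app: wrapped-volume-race
      spec:
        containers:
        - name: test-container
          image: busybox                      # assumed image
          command: ["sleep", "3600"]
          volumeMounts:
          - name: cm-0
            mountPath: /etc/cm-0
          - name: cm-1
            mountPath: /etc/cm-1
          # ... one mount per configmap, 50 in the test
        volumes:
        - name: cm-0
          configMap:
            name: race-configmap-0            # assumed name
        - name: cm-1
          configMap:
            name: race-configmap-1            # assumed name
        # ... one volume per configmap
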
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:10:56.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466
STEP: creating a pod
Jan 25 10:10:56.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-7816 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jan 25 10:10:59.057: INFO: stderr: ""
Jan 25 10:10:59.058: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Waiting for log generator to start.
Jan 25 10:10:59.058: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jan 25 10:10:59.058: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7816" to be "running and ready, or succeeded"
Jan 25 10:10:59.064: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.309212ms
Jan 25 10:11:01.079: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020000687s
Jan 25 10:11:03.134: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07516159s
Jan 25 10:11:05.195: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136039302s
Jan 25 10:11:07.228: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.169107709s
Jan 25 10:11:07.228: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jan 25 10:11:07.228: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jan 25 10:11:07.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7816'
Jan 25 10:11:07.412: INFO: stderr: ""
Jan 25 10:11:07.412: INFO: stdout: "I0125 10:11:06.545183       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/w5v 369\nI0125 10:11:06.745304       1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/z4p 497\nI0125 10:11:06.945740       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/lkx6 272\nI0125 10:11:07.145574       1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/cxd 232\nI0125 10:11:07.345551       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/l4mv 560\n"
STEP: limiting log lines
Jan 25 10:11:07.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7816 --tail=1'
Jan 25 10:11:07.594: INFO: stderr: ""
Jan 25 10:11:07.594: INFO: stdout: "I0125 10:11:07.545401       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/qtrm 465\n"
Jan 25 10:11:07.594: INFO: got output "I0125 10:11:07.545401       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/qtrm 465\n"
STEP: limiting log bytes
Jan 25 10:11:07.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7816 --limit-bytes=1'
Jan 25 10:11:07.723: INFO: stderr: ""
Jan 25 10:11:07.723: INFO: stdout: "I"
Jan 25 10:11:07.723: INFO: got output "I"
STEP: exposing timestamps
Jan 25 10:11:07.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7816 --tail=1 --timestamps'
Jan 25 10:11:07.833: INFO: stderr: ""
Jan 25 10:11:07.833: INFO: stdout: "2020-01-25T10:11:07.746724993Z I0125 10:11:07.745341       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/2k69 534\n"
Jan 25 10:11:07.833: INFO: got output "2020-01-25T10:11:07.746724993Z I0125 10:11:07.745341       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/2k69 534\n"
STEP: restricting to a time range
Jan 25 10:11:10.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7816 --since=1s'
Jan 25 10:11:10.526: INFO: stderr: ""
Jan 25 10:11:10.526: INFO: stdout: "I0125 10:11:09.545593       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/trk 421\nI0125 10:11:09.745507       1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/t2xt 416\nI0125 10:11:09.945649       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/t67 293\nI0125 10:11:10.145707       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/x49 570\nI0125 10:11:10.345648       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/q8wm 244\n"
Jan 25 10:11:10.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7816 --since=24h'
Jan 25 10:11:10.702: INFO: stderr: ""
Jan 25 10:11:10.702: INFO: stdout: "I0125 10:11:06.545183       1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/w5v 369\nI0125 10:11:06.745304       1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/z4p 497\nI0125 10:11:06.945740       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/lkx6 272\nI0125 10:11:07.145574       1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/cxd 232\nI0125 10:11:07.345551       1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/l4mv 560\nI0125 10:11:07.545401       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/qtrm 465\nI0125 10:11:07.745341       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/2k69 534\nI0125 10:11:07.945635       1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/jhjf 465\nI0125 10:11:08.145607       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/8wp9 352\nI0125 10:11:08.345474       1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/wnd 370\nI0125 10:11:08.545705       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/jk2 346\nI0125 10:11:08.745510       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/pbv 254\nI0125 10:11:08.945741       1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/kddk 575\nI0125 10:11:09.145598       1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/krnx 510\nI0125 10:11:09.345584       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/hfqs 534\nI0125 10:11:09.545593       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/trk 421\nI0125 10:11:09.745507       1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/t2xt 416\nI0125 10:11:09.945649       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/t67 293\nI0125 10:11:10.145707       1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/x49 570\nI0125 10:11:10.345648       1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/q8wm 244\nI0125 10:11:10.545742       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/2tb 526\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472
Jan 25 10:11:10.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7816'
Jan 25 10:11:23.551: INFO: stderr: ""
Jan 25 10:11:23.551: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:11:23.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7816" for this suite.

• [SLOW TEST:26.877 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":279,"completed":95,"skipped":1987,"failed":0}
SSSSSSSSS
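
The filtering flags exercised above, gathered in one place (pod and namespace as in this run; with a single-container pod the container argument can be dropped):

  kubectl logs logs-generator --namespace=kubectl-7816                          # everything so far
  kubectl logs logs-generator --namespace=kubectl-7816 --tail=1                 # last line only
  kubectl logs logs-generator --namespace=kubectl-7816 --limit-bytes=1          # first byte only
  kubectl logs logs-generator --namespace=kubectl-7816 --tail=1 --timestamps    # prefix RFC3339 timestamps
  kubectl logs logs-generator --namespace=kubectl-7816 --since=1s               # only entries from the last second
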
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:11:23.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Jan 25 10:11:23.628: INFO: namespace kubectl-6235
Jan 25 10:11:23.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6235'
Jan 25 10:11:24.137: INFO: stderr: ""
Jan 25 10:11:24.137: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 25 10:11:25.152: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 10:11:25.152: INFO: Found 0 / 1
Jan 25 10:11:26.152: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 10:11:26.153: INFO: Found 0 / 1
Jan 25 10:11:27.145: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 10:11:27.145: INFO: Found 0 / 1
Jan 25 10:11:28.147: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 10:11:28.147: INFO: Found 0 / 1
Jan 25 10:11:29.149: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 10:11:29.149: INFO: Found 0 / 1
Jan 25 10:11:30.146: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 10:11:30.146: INFO: Found 0 / 1
Jan 25 10:11:31.151: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 10:11:31.151: INFO: Found 0 / 1
Jan 25 10:11:32.147: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 10:11:32.148: INFO: Found 1 / 1
Jan 25 10:11:32.148: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 25 10:11:32.152: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 10:11:32.152: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 25 10:11:32.152: INFO: wait on agnhost-master startup in kubectl-6235 
Jan 25 10:11:32.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-928d5 agnhost-master --namespace=kubectl-6235'
Jan 25 10:11:32.349: INFO: stderr: ""
Jan 25 10:11:32.349: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 25 10:11:32.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6235'
Jan 25 10:11:32.508: INFO: stderr: ""
Jan 25 10:11:32.508: INFO: stdout: "service/rm2 exposed\n"
Jan 25 10:11:32.518: INFO: Service rm2 in namespace kubectl-6235 found.
STEP: exposing service
Jan 25 10:11:34.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6235'
Jan 25 10:11:34.723: INFO: stderr: ""
Jan 25 10:11:34.724: INFO: stdout: "service/rm3 exposed\n"
Jan 25 10:11:34.744: INFO: Service rm3 in namespace kubectl-6235 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:11:36.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6235" for this suite.

• [SLOW TEST:13.239 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":279,"completed":96,"skipped":1996,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:11:36.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 25 10:11:37.006: INFO: Waiting up to 5m0s for pod "pod-ebd976e7-b243-4794-8780-dc49ad1e257e" in namespace "emptydir-1536" to be "success or failure"
Jan 25 10:11:37.017: INFO: Pod "pod-ebd976e7-b243-4794-8780-dc49ad1e257e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.71395ms
Jan 25 10:11:39.024: INFO: Pod "pod-ebd976e7-b243-4794-8780-dc49ad1e257e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018390199s
Jan 25 10:11:41.032: INFO: Pod "pod-ebd976e7-b243-4794-8780-dc49ad1e257e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026028754s
Jan 25 10:11:43.039: INFO: Pod "pod-ebd976e7-b243-4794-8780-dc49ad1e257e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032781103s
Jan 25 10:11:45.048: INFO: Pod "pod-ebd976e7-b243-4794-8780-dc49ad1e257e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041929999s
Jan 25 10:11:47.072: INFO: Pod "pod-ebd976e7-b243-4794-8780-dc49ad1e257e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066031615s
STEP: Saw pod success
Jan 25 10:11:47.073: INFO: Pod "pod-ebd976e7-b243-4794-8780-dc49ad1e257e" satisfied condition "success or failure"
Jan 25 10:11:47.086: INFO: Trying to get logs from node jerma-node pod pod-ebd976e7-b243-4794-8780-dc49ad1e257e container test-container: 
STEP: delete the pod
Jan 25 10:11:47.160: INFO: Waiting for pod pod-ebd976e7-b243-4794-8780-dc49ad1e257e to disappear
Jan 25 10:11:47.217: INFO: Pod pod-ebd976e7-b243-4794-8780-dc49ad1e257e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:11:47.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1536" for this suite.

• [SLOW TEST:10.412 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":97,"skipped":2014,"failed":0}
SSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:11:47.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 25 10:11:47.391: INFO: Waiting up to 5m0s for pod "downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8" in namespace "downward-api-7249" to be "success or failure"
Jan 25 10:11:47.402: INFO: Pod "downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.586063ms
Jan 25 10:11:49.410: INFO: Pod "downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019051026s
Jan 25 10:11:51.420: INFO: Pod "downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028720625s
Jan 25 10:11:53.427: INFO: Pod "downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036127556s
Jan 25 10:11:55.435: INFO: Pod "downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043571366s
Jan 25 10:11:57.443: INFO: Pod "downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052317294s
STEP: Saw pod success
Jan 25 10:11:57.443: INFO: Pod "downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8" satisfied condition "success or failure"
Jan 25 10:11:57.449: INFO: Trying to get logs from node jerma-node pod downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8 container dapi-container: 
STEP: delete the pod
Jan 25 10:11:57.663: INFO: Waiting for pod downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8 to disappear
Jan 25 10:11:57.672: INFO: Pod downward-api-1fcd6753-b9dd-4c0a-8d1f-136b627cfad8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:11:57.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7249" for this suite.

• [SLOW TEST:10.458 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":279,"completed":98,"skipped":2017,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:11:57.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 25 10:12:06.576: INFO: Successfully updated pod "labelsupdate4e7930e8-0831-41f3-b7bd-c9e17fe7e861"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:12:08.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7113" for this suite.

• [SLOW TEST:10.978 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":279,"completed":99,"skipped":2051,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:12:08.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:12:19.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9481" for this suite.

• [SLOW TEST:11.200 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":279,"completed":100,"skipped":2061,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:12:19.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 25 10:12:20.014: INFO: Waiting up to 5m0s for pod "downward-api-6cd445be-95c0-442d-ad95-6387622ac932" in namespace "downward-api-7638" to be "success or failure"
Jan 25 10:12:20.033: INFO: Pod "downward-api-6cd445be-95c0-442d-ad95-6387622ac932": Phase="Pending", Reason="", readiness=false. Elapsed: 18.702552ms
Jan 25 10:12:22.041: INFO: Pod "downward-api-6cd445be-95c0-442d-ad95-6387622ac932": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027027066s
Jan 25 10:12:24.048: INFO: Pod "downward-api-6cd445be-95c0-442d-ad95-6387622ac932": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03326127s
Jan 25 10:12:26.069: INFO: Pod "downward-api-6cd445be-95c0-442d-ad95-6387622ac932": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054696993s
Jan 25 10:12:28.074: INFO: Pod "downward-api-6cd445be-95c0-442d-ad95-6387622ac932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059184149s
STEP: Saw pod success
Jan 25 10:12:28.074: INFO: Pod "downward-api-6cd445be-95c0-442d-ad95-6387622ac932" satisfied condition "success or failure"
Jan 25 10:12:28.077: INFO: Trying to get logs from node jerma-node pod downward-api-6cd445be-95c0-442d-ad95-6387622ac932 container dapi-container: 
STEP: delete the pod
Jan 25 10:12:28.112: INFO: Waiting for pod downward-api-6cd445be-95c0-442d-ad95-6387622ac932 to disappear
Jan 25 10:12:28.119: INFO: Pod downward-api-6cd445be-95c0-442d-ad95-6387622ac932 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:12:28.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7638" for this suite.

• [SLOW TEST:8.263 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":279,"completed":101,"skipped":2069,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:12:28.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-4685f2ec-f94d-458e-a24e-52c7ae25e0e7
STEP: Creating a pod to test consume configMaps
Jan 25 10:12:28.244: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f3c6c9a-0712-4a33-bbc8-cd81f222d6d9" in namespace "projected-8550" to be "success or failure"
Jan 25 10:12:28.270: INFO: Pod "pod-projected-configmaps-7f3c6c9a-0712-4a33-bbc8-cd81f222d6d9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.081635ms
Jan 25 10:12:30.285: INFO: Pod "pod-projected-configmaps-7f3c6c9a-0712-4a33-bbc8-cd81f222d6d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041660368s
Jan 25 10:12:32.298: INFO: Pod "pod-projected-configmaps-7f3c6c9a-0712-4a33-bbc8-cd81f222d6d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054374518s
Jan 25 10:12:34.305: INFO: Pod "pod-projected-configmaps-7f3c6c9a-0712-4a33-bbc8-cd81f222d6d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061529505s
Jan 25 10:12:36.318: INFO: Pod "pod-projected-configmaps-7f3c6c9a-0712-4a33-bbc8-cd81f222d6d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074032558s
STEP: Saw pod success
Jan 25 10:12:36.318: INFO: Pod "pod-projected-configmaps-7f3c6c9a-0712-4a33-bbc8-cd81f222d6d9" satisfied condition "success or failure"
Jan 25 10:12:36.323: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-7f3c6c9a-0712-4a33-bbc8-cd81f222d6d9 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 10:12:36.439: INFO: Waiting for pod pod-projected-configmaps-7f3c6c9a-0712-4a33-bbc8-cd81f222d6d9 to disappear
Jan 25 10:12:36.444: INFO: Pod pod-projected-configmaps-7f3c6c9a-0712-4a33-bbc8-cd81f222d6d9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:12:36.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8550" for this suite.

• [SLOW TEST:8.330 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":279,"completed":102,"skipped":2097,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:12:36.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-c134840d-196c-4349-b0af-4e31703bb3cf
STEP: Creating a pod to test consume secrets
Jan 25 10:12:36.647: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-adfcf80a-4f2c-407f-b3af-d47c3b3762d1" in namespace "projected-6465" to be "success or failure"
Jan 25 10:12:36.665: INFO: Pod "pod-projected-secrets-adfcf80a-4f2c-407f-b3af-d47c3b3762d1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.066085ms
Jan 25 10:12:38.678: INFO: Pod "pod-projected-secrets-adfcf80a-4f2c-407f-b3af-d47c3b3762d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031477513s
Jan 25 10:12:40.688: INFO: Pod "pod-projected-secrets-adfcf80a-4f2c-407f-b3af-d47c3b3762d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041055122s
Jan 25 10:12:42.696: INFO: Pod "pod-projected-secrets-adfcf80a-4f2c-407f-b3af-d47c3b3762d1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049155022s
Jan 25 10:12:44.709: INFO: Pod "pod-projected-secrets-adfcf80a-4f2c-407f-b3af-d47c3b3762d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06233586s
STEP: Saw pod success
Jan 25 10:12:44.710: INFO: Pod "pod-projected-secrets-adfcf80a-4f2c-407f-b3af-d47c3b3762d1" satisfied condition "success or failure"
Jan 25 10:12:44.735: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-adfcf80a-4f2c-407f-b3af-d47c3b3762d1 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 10:12:44.865: INFO: Waiting for pod pod-projected-secrets-adfcf80a-4f2c-407f-b3af-d47c3b3762d1 to disappear
Jan 25 10:12:44.909: INFO: Pod pod-projected-secrets-adfcf80a-4f2c-407f-b3af-d47c3b3762d1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:12:44.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6465" for this suite.

• [SLOW TEST:8.462 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":103,"skipped":2131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:12:44.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 10:12:45.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2d456d7-7e13-45e5-a03a-00284f3444a9" in namespace "downward-api-4264" to be "success or failure"
Jan 25 10:12:45.126: INFO: Pod "downwardapi-volume-b2d456d7-7e13-45e5-a03a-00284f3444a9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.742143ms
Jan 25 10:12:47.136: INFO: Pod "downwardapi-volume-b2d456d7-7e13-45e5-a03a-00284f3444a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023378221s
Jan 25 10:12:49.147: INFO: Pod "downwardapi-volume-b2d456d7-7e13-45e5-a03a-00284f3444a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033987241s
Jan 25 10:12:51.161: INFO: Pod "downwardapi-volume-b2d456d7-7e13-45e5-a03a-00284f3444a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047819834s
Jan 25 10:12:53.174: INFO: Pod "downwardapi-volume-b2d456d7-7e13-45e5-a03a-00284f3444a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060856708s
STEP: Saw pod success
Jan 25 10:12:53.174: INFO: Pod "downwardapi-volume-b2d456d7-7e13-45e5-a03a-00284f3444a9" satisfied condition "success or failure"
Jan 25 10:12:53.181: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b2d456d7-7e13-45e5-a03a-00284f3444a9 container client-container: 
STEP: delete the pod
Jan 25 10:12:53.276: INFO: Waiting for pod downwardapi-volume-b2d456d7-7e13-45e5-a03a-00284f3444a9 to disappear
Jan 25 10:12:53.293: INFO: Pod downwardapi-volume-b2d456d7-7e13-45e5-a03a-00284f3444a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:12:53.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4264" for this suite.

• [SLOW TEST:8.375 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":279,"completed":104,"skipped":2192,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:12:53.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-0819562f-50fa-431c-9a4f-35e00815e89e
STEP: Creating a pod to test consume secrets
Jan 25 10:12:53.693: INFO: Waiting up to 5m0s for pod "pod-secrets-94b8323d-ea2d-41ba-8121-9186f11cb0f8" in namespace "secrets-610" to be "success or failure"
Jan 25 10:12:53.755: INFO: Pod "pod-secrets-94b8323d-ea2d-41ba-8121-9186f11cb0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 61.74718ms
Jan 25 10:12:55.771: INFO: Pod "pod-secrets-94b8323d-ea2d-41ba-8121-9186f11cb0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077131395s
Jan 25 10:12:57.781: INFO: Pod "pod-secrets-94b8323d-ea2d-41ba-8121-9186f11cb0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087807081s
Jan 25 10:12:59.795: INFO: Pod "pod-secrets-94b8323d-ea2d-41ba-8121-9186f11cb0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101736327s
Jan 25 10:13:01.813: INFO: Pod "pod-secrets-94b8323d-ea2d-41ba-8121-9186f11cb0f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.119488757s
STEP: Saw pod success
Jan 25 10:13:01.813: INFO: Pod "pod-secrets-94b8323d-ea2d-41ba-8121-9186f11cb0f8" satisfied condition "success or failure"
Jan 25 10:13:01.821: INFO: Trying to get logs from node jerma-node pod pod-secrets-94b8323d-ea2d-41ba-8121-9186f11cb0f8 container secret-volume-test: 
STEP: delete the pod
Jan 25 10:13:02.205: INFO: Waiting for pod pod-secrets-94b8323d-ea2d-41ba-8121-9186f11cb0f8 to disappear
Jan 25 10:13:02.220: INFO: Pod pod-secrets-94b8323d-ea2d-41ba-8121-9186f11cb0f8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:13:02.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-610" for this suite.

• [SLOW TEST:8.924 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":105,"skipped":2196,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:13:02.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-zdh7n in namespace proxy-3073
I0125 10:13:02.482327       9 runners.go:189] Created replication controller with name: proxy-service-zdh7n, namespace: proxy-3073, replica count: 1
I0125 10:13:03.534391       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:13:04.535242       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:13:05.536441       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:13:06.537433       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:13:07.538294       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:13:08.539384       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:13:09.540260       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 10:13:10.541399       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 10:13:11.542216       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 10:13:12.543810       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 10:13:13.544906       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 10:13:14.545755       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 10:13:15.546540       9 runners.go:189] proxy-service-zdh7n Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 10:13:15.584: INFO: setup took 13.169387774s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 25 10:13:15.604: INFO: (0) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 19.312152ms)
Jan 25 10:13:15.604: INFO: (0) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 20.134659ms)
Jan 25 10:13:15.605: INFO: (0) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 19.851508ms)
Jan 25 10:13:15.605: INFO: (0) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 20.751862ms)
Jan 25 10:13:15.606: INFO: (0) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname1/proxy/: foo (200; 21.416221ms)
Jan 25 10:13:15.606: INFO: (0) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 21.431774ms)
Jan 25 10:13:15.606: INFO: (0) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 21.473024ms)
Jan 25 10:13:15.606: INFO: (0) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 21.722833ms)
Jan 25 10:13:15.607: INFO: (0) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 22.298916ms)
Jan 25 10:13:15.607: INFO: (0) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 22.401458ms)
Jan 25 10:13:15.607: INFO: (0) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 22.407762ms)
Jan 25 10:13:15.616: INFO: (0) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: test (200; 17.435411ms)
Jan 25 10:13:15.640: INFO: (1) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 19.530678ms)
Jan 25 10:13:15.640: INFO: (1) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: test<... (200; 19.697919ms)
Jan 25 10:13:15.641: INFO: (1) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 21.417776ms)
Jan 25 10:13:15.641: INFO: (1) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 20.433994ms)
Jan 25 10:13:15.643: INFO: (1) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 21.747524ms)
Jan 25 10:13:15.645: INFO: (1) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname1/proxy/: foo (200; 24.48183ms)
Jan 25 10:13:15.646: INFO: (1) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname1/proxy/: tls baz (200; 25.751252ms)
Jan 25 10:13:15.647: INFO: (1) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 25.163119ms)
Jan 25 10:13:15.647: INFO: (1) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 26.162963ms)
Jan 25 10:13:15.657: INFO: (2) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: ... (200; 9.347969ms)
Jan 25 10:13:15.658: INFO: (2) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname1/proxy/: tls baz (200; 9.770083ms)
Jan 25 10:13:15.661: INFO: (2) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 12.300839ms)
Jan 25 10:13:15.661: INFO: (2) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 12.254385ms)
Jan 25 10:13:15.661: INFO: (2) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname1/proxy/: foo (200; 12.736997ms)
Jan 25 10:13:15.661: INFO: (2) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 12.587725ms)
Jan 25 10:13:15.661: INFO: (2) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 11.489728ms)
Jan 25 10:13:15.661: INFO: (2) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 13.716485ms)
Jan 25 10:13:15.661: INFO: (2) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 12.812773ms)
Jan 25 10:13:15.661: INFO: (2) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 12.992004ms)
Jan 25 10:13:15.662: INFO: (2) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 14.042148ms)
Jan 25 10:13:15.662: INFO: (2) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 13.725625ms)
Jan 25 10:13:15.663: INFO: (2) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 14.402501ms)
Jan 25 10:13:15.663: INFO: (2) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 15.621587ms)
Jan 25 10:13:15.667: INFO: (3) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 3.821429ms)
Jan 25 10:13:15.669: INFO: (3) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 5.205915ms)
Jan 25 10:13:15.669: INFO: (3) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 5.942298ms)
Jan 25 10:13:15.670: INFO: (3) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 6.571723ms)
Jan 25 10:13:15.675: INFO: (3) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 10.715773ms)
Jan 25 10:13:15.675: INFO: (3) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 11.16046ms)
Jan 25 10:13:15.675: INFO: (3) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 11.259348ms)
Jan 25 10:13:15.675: INFO: (3) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 11.616183ms)
Jan 25 10:13:15.675: INFO: (3) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 11.498135ms)
Jan 25 10:13:15.676: INFO: (3) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 11.6036ms)
Jan 25 10:13:15.676: INFO: (3) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname1/proxy/: foo (200; 11.661003ms)
Jan 25 10:13:15.676: INFO: (3) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: ... (200; 10.381859ms)
Jan 25 10:13:15.688: INFO: (4) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 11.304917ms)
Jan 25 10:13:15.688: INFO: (4) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 11.250853ms)
Jan 25 10:13:15.688: INFO: (4) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 11.591717ms)
Jan 25 10:13:15.688: INFO: (4) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 11.080932ms)
Jan 25 10:13:15.690: INFO: (4) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 12.870147ms)
Jan 25 10:13:15.690: INFO: (4) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 13.249448ms)
Jan 25 10:13:15.690: INFO: (4) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 12.903518ms)
Jan 25 10:13:15.690: INFO: (4) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: ... (200; 10.874636ms)
Jan 25 10:13:15.703: INFO: (5) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: test (200; 16.006027ms)
Jan 25 10:13:15.708: INFO: (5) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 16.107876ms)
Jan 25 10:13:15.718: INFO: (6) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 10.140589ms)
Jan 25 10:13:15.719: INFO: (6) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: ... (200; 11.782466ms)
Jan 25 10:13:15.721: INFO: (6) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 12.334231ms)
Jan 25 10:13:15.722: INFO: (6) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 13.638813ms)
Jan 25 10:13:15.722: INFO: (6) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 14.259184ms)
Jan 25 10:13:15.723: INFO: (6) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname1/proxy/: tls baz (200; 14.113608ms)
Jan 25 10:13:15.723: INFO: (6) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 14.778593ms)
Jan 25 10:13:15.723: INFO: (6) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 14.961952ms)
Jan 25 10:13:15.725: INFO: (6) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 16.502821ms)
Jan 25 10:13:15.727: INFO: (6) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname1/proxy/: foo (200; 18.372976ms)
Jan 25 10:13:15.729: INFO: (6) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 20.442244ms)
Jan 25 10:13:15.741: INFO: (7) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 11.867945ms)
Jan 25 10:13:15.742: INFO: (7) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: test (200; 13.923617ms)
Jan 25 10:13:15.743: INFO: (7) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname1/proxy/: foo (200; 14.225419ms)
Jan 25 10:13:15.744: INFO: (7) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 14.566611ms)
Jan 25 10:13:15.744: INFO: (7) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 14.835645ms)
Jan 25 10:13:15.744: INFO: (7) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 14.945626ms)
Jan 25 10:13:15.744: INFO: (7) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 15.142742ms)
Jan 25 10:13:15.745: INFO: (7) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 15.03312ms)
Jan 25 10:13:15.745: INFO: (7) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 15.515606ms)
Jan 25 10:13:15.745: INFO: (7) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 15.719104ms)
Jan 25 10:13:15.745: INFO: (7) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 15.557977ms)
Jan 25 10:13:15.745: INFO: (7) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 16.043262ms)
Jan 25 10:13:15.746: INFO: (7) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 16.644413ms)
Jan 25 10:13:15.746: INFO: (7) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 16.848691ms)
Jan 25 10:13:15.752: INFO: (8) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 5.462482ms)
Jan 25 10:13:15.757: INFO: (8) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 10.011434ms)
Jan 25 10:13:15.757: INFO: (8) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 10.523694ms)
Jan 25 10:13:15.758: INFO: (8) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname1/proxy/: foo (200; 11.320312ms)
Jan 25 10:13:15.758: INFO: (8) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 11.968068ms)
Jan 25 10:13:15.759: INFO: (8) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: test (200; 12.992302ms)
Jan 25 10:13:15.760: INFO: (8) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 13.20435ms)
Jan 25 10:13:15.760: INFO: (8) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname1/proxy/: tls baz (200; 13.353722ms)
Jan 25 10:13:15.760: INFO: (8) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 13.683597ms)
Jan 25 10:13:15.761: INFO: (8) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 14.078038ms)
Jan 25 10:13:15.761: INFO: (8) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 13.942094ms)
Jan 25 10:13:15.763: INFO: (8) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 17.082171ms)
Jan 25 10:13:15.764: INFO: (8) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 17.835834ms)
Jan 25 10:13:15.774: INFO: (9) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 9.659084ms)
Jan 25 10:13:15.774: INFO: (9) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 9.95403ms)
Jan 25 10:13:15.778: INFO: (9) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 13.12191ms)
Jan 25 10:13:15.779: INFO: (9) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname1/proxy/: tls baz (200; 14.472571ms)
Jan 25 10:13:15.779: INFO: (9) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 14.830182ms)
Jan 25 10:13:15.782: INFO: (9) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 17.357287ms)
Jan 25 10:13:15.782: INFO: (9) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: test<... (200; 7.356143ms)
Jan 25 10:13:15.793: INFO: (10) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 7.581982ms)
Jan 25 10:13:15.796: INFO: (10) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 9.725294ms)
Jan 25 10:13:15.796: INFO: (10) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 9.911603ms)
Jan 25 10:13:15.796: INFO: (10) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 10.162222ms)
Jan 25 10:13:15.796: INFO: (10) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 10.319803ms)
Jan 25 10:13:15.796: INFO: (10) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: ... (200; 8.067578ms)
Jan 25 10:13:15.811: INFO: (11) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 8.243635ms)
Jan 25 10:13:15.811: INFO: (11) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 8.643707ms)
Jan 25 10:13:15.811: INFO: (11) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 8.742014ms)
Jan 25 10:13:15.811: INFO: (11) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 8.201314ms)
Jan 25 10:13:15.811: INFO: (11) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 8.772818ms)
Jan 25 10:13:15.811: INFO: (11) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: ... (200; 8.204532ms)
Jan 25 10:13:15.822: INFO: (12) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 8.462546ms)
Jan 25 10:13:15.824: INFO: (12) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 9.919067ms)
Jan 25 10:13:15.824: INFO: (12) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 9.970276ms)
Jan 25 10:13:15.824: INFO: (12) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 9.515977ms)
Jan 25 10:13:15.824: INFO: (12) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 10.495652ms)
Jan 25 10:13:15.824: INFO: (12) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 10.297881ms)
Jan 25 10:13:15.824: INFO: (12) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 10.509288ms)
Jan 25 10:13:15.825: INFO: (12) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 10.133493ms)
Jan 25 10:13:15.825: INFO: (12) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: test<... (200; 12.740911ms)
Jan 25 10:13:15.841: INFO: (13) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 12.620529ms)
Jan 25 10:13:15.842: INFO: (13) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: test (200; 19.02666ms)
Jan 25 10:13:15.847: INFO: (13) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 18.941418ms)
Jan 25 10:13:15.847: INFO: (13) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 18.676962ms)
Jan 25 10:13:15.847: INFO: (13) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 19.172273ms)
Jan 25 10:13:15.848: INFO: (13) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 18.850778ms)
Jan 25 10:13:15.848: INFO: (13) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 18.964016ms)
Jan 25 10:13:15.848: INFO: (13) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 19.030141ms)
Jan 25 10:13:15.870: INFO: (14) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: ... (200; 21.099452ms)
Jan 25 10:13:15.870: INFO: (14) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 21.681893ms)
Jan 25 10:13:15.871: INFO: (14) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 22.353218ms)
Jan 25 10:13:15.870: INFO: (14) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 22.065149ms)
Jan 25 10:13:15.871: INFO: (14) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 21.983821ms)
Jan 25 10:13:15.871: INFO: (14) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 22.430261ms)
Jan 25 10:13:15.873: INFO: (14) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname1/proxy/: foo (200; 24.773602ms)
Jan 25 10:13:15.873: INFO: (14) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 24.493282ms)
Jan 25 10:13:15.874: INFO: (14) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 25.402418ms)
Jan 25 10:13:15.874: INFO: (14) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname1/proxy/: tls baz (200; 24.968718ms)
Jan 25 10:13:15.874: INFO: (14) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 25.546252ms)
Jan 25 10:13:15.874: INFO: (14) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 25.889983ms)
Jan 25 10:13:15.874: INFO: (14) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 25.702858ms)
Jan 25 10:13:15.875: INFO: (14) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 26.116418ms)
Jan 25 10:13:15.875: INFO: (14) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 26.628703ms)
Jan 25 10:13:15.883: INFO: (15) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 8.211445ms)
Jan 25 10:13:15.883: INFO: (15) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 7.789844ms)
Jan 25 10:13:15.884: INFO: (15) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 8.409883ms)
Jan 25 10:13:15.884: INFO: (15) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 9.234644ms)
Jan 25 10:13:15.885: INFO: (15) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 9.351066ms)
Jan 25 10:13:15.885: INFO: (15) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 8.872226ms)
Jan 25 10:13:15.885: INFO: (15) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 9.679146ms)
Jan 25 10:13:15.885: INFO: (15) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 9.885311ms)
Jan 25 10:13:15.885: INFO: (15) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 9.580363ms)
Jan 25 10:13:15.885: INFO: (15) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 9.956311ms)
Jan 25 10:13:15.887: INFO: (15) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: ... (200; 8.536029ms)
Jan 25 10:13:15.899: INFO: (16) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 8.760064ms)
Jan 25 10:13:15.902: INFO: (16) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 11.898884ms)
Jan 25 10:13:15.902: INFO: (16) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 11.708838ms)
Jan 25 10:13:15.902: INFO: (16) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 11.791253ms)
Jan 25 10:13:15.903: INFO: (16) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 13.003448ms)
Jan 25 10:13:15.904: INFO: (16) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: test (200; 13.845524ms)
Jan 25 10:13:15.904: INFO: (16) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 14.29267ms)
Jan 25 10:13:15.904: INFO: (16) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 14.301571ms)
Jan 25 10:13:15.904: INFO: (16) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname1/proxy/: tls baz (200; 14.15786ms)
Jan 25 10:13:15.913: INFO: (17) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 7.451252ms)
Jan 25 10:13:15.913: INFO: (17) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 8.027592ms)
Jan 25 10:13:15.913: INFO: (17) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 8.224878ms)
Jan 25 10:13:15.913: INFO: (17) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 8.287961ms)
Jan 25 10:13:15.913: INFO: (17) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 8.335285ms)
Jan 25 10:13:15.914: INFO: (17) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 9.168593ms)
Jan 25 10:13:15.914: INFO: (17) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 8.860805ms)
Jan 25 10:13:15.914: INFO: (17) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: ... (200; 9.31392ms)
Jan 25 10:13:15.914: INFO: (17) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 9.104134ms)
Jan 25 10:13:15.918: INFO: (17) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 12.946531ms)
Jan 25 10:13:15.918: INFO: (17) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 12.675619ms)
Jan 25 10:13:15.918: INFO: (17) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 13.005223ms)
Jan 25 10:13:15.919: INFO: (17) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname1/proxy/: tls baz (200; 13.881496ms)
Jan 25 10:13:15.919: INFO: (17) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 13.929896ms)
Jan 25 10:13:15.920: INFO: (17) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname1/proxy/: foo (200; 14.661729ms)
Jan 25 10:13:15.927: INFO: (18) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 6.695855ms)
Jan 25 10:13:15.927: INFO: (18) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 6.814033ms)
Jan 25 10:13:15.928: INFO: (18) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 7.411108ms)
Jan 25 10:13:15.928: INFO: (18) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 7.814186ms)
Jan 25 10:13:15.930: INFO: (18) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname2/proxy/: tls qux (200; 9.518739ms)
Jan 25 10:13:15.930: INFO: (18) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 9.397525ms)
Jan 25 10:13:15.932: INFO: (18) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 11.645322ms)
Jan 25 10:13:15.933: INFO: (18) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: test (200; 12.010433ms)
Jan 25 10:13:15.933: INFO: (18) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 12.574529ms)
Jan 25 10:13:15.934: INFO: (18) /api/v1/namespaces/proxy-3073/services/https:proxy-service-zdh7n:tlsportname1/proxy/: tls baz (200; 13.546794ms)
Jan 25 10:13:15.934: INFO: (18) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname1/proxy/: foo (200; 13.203441ms)
Jan 25 10:13:15.934: INFO: (18) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 13.539853ms)
Jan 25 10:13:15.934: INFO: (18) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname2/proxy/: bar (200; 13.871159ms)
Jan 25 10:13:15.935: INFO: (18) /api/v1/namespaces/proxy-3073/services/http:proxy-service-zdh7n:portname1/proxy/: foo (200; 14.422809ms)
Jan 25 10:13:15.935: INFO: (18) /api/v1/namespaces/proxy-3073/services/proxy-service-zdh7n:portname2/proxy/: bar (200; 14.898121ms)
Jan 25 10:13:15.951: INFO: (19) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h/proxy/: test (200; 15.269036ms)
Jan 25 10:13:15.951: INFO: (19) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:462/proxy/: tls qux (200; 15.412695ms)
Jan 25 10:13:15.951: INFO: (19) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:460/proxy/: tls baz (200; 15.855492ms)
Jan 25 10:13:15.951: INFO: (19) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:1080/proxy/: ... (200; 15.83531ms)
Jan 25 10:13:15.952: INFO: (19) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 16.529714ms)
Jan 25 10:13:15.954: INFO: (19) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:162/proxy/: bar (200; 18.52701ms)
Jan 25 10:13:15.955: INFO: (19) /api/v1/namespaces/proxy-3073/pods/http:proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 19.244387ms)
Jan 25 10:13:15.955: INFO: (19) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:160/proxy/: foo (200; 19.452985ms)
Jan 25 10:13:15.955: INFO: (19) /api/v1/namespaces/proxy-3073/pods/proxy-service-zdh7n-2bq9h:1080/proxy/: test<... (200; 19.592309ms)
Jan 25 10:13:15.956: INFO: (19) /api/v1/namespaces/proxy-3073/pods/https:proxy-service-zdh7n-2bq9h:443/proxy/: 
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating server pod server in namespace prestop-1188
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1188
STEP: Deleting pre-stop pod
Jan 25 10:13:53.690: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:13:53.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1188" for this suite.

• [SLOW TEST:21.279 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":279,"completed":107,"skipped":2262,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:13:53.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:13:53.985: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:14:00.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9637" for this suite.

• [SLOW TEST:6.432 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":279,"completed":108,"skipped":2293,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:14:00.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service nodeport-service with the type=NodePort in namespace services-2457
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-2457
STEP: creating replication controller externalsvc in namespace services-2457
I0125 10:14:00.418927       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2457, replica count: 2
I0125 10:14:03.471547       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:14:06.473048       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:14:09.473852       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 10:14:12.474941       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jan 25 10:14:12.614: INFO: Creating new exec pod
Jan 25 10:14:20.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2457 execpod5r7qv -- /bin/sh -x -c nslookup nodeport-service'
Jan 25 10:14:21.028: INFO: stderr: "I0125 10:14:20.815212    2227 log.go:172] (0xc000780790) (0xc00079e320) Create stream\nI0125 10:14:20.815337    2227 log.go:172] (0xc000780790) (0xc00079e320) Stream added, broadcasting: 1\nI0125 10:14:20.822383    2227 log.go:172] (0xc000780790) Reply frame received for 1\nI0125 10:14:20.822422    2227 log.go:172] (0xc000780790) (0xc000689c20) Create stream\nI0125 10:14:20.822427    2227 log.go:172] (0xc000780790) (0xc000689c20) Stream added, broadcasting: 3\nI0125 10:14:20.826246    2227 log.go:172] (0xc000780790) Reply frame received for 3\nI0125 10:14:20.826269    2227 log.go:172] (0xc000780790) (0xc00079e3c0) Create stream\nI0125 10:14:20.826274    2227 log.go:172] (0xc000780790) (0xc00079e3c0) Stream added, broadcasting: 5\nI0125 10:14:20.827482    2227 log.go:172] (0xc000780790) Reply frame received for 5\nI0125 10:14:20.892923    2227 log.go:172] (0xc000780790) Data frame received for 5\nI0125 10:14:20.893061    2227 log.go:172] (0xc00079e3c0) (5) Data frame handling\nI0125 10:14:20.893131    2227 log.go:172] (0xc00079e3c0) (5) Data frame sent\n+ nslookup nodeport-service\nI0125 10:14:20.912113    2227 log.go:172] (0xc000780790) Data frame received for 3\nI0125 10:14:20.912174    2227 log.go:172] (0xc000689c20) (3) Data frame handling\nI0125 10:14:20.912197    2227 log.go:172] (0xc000689c20) (3) Data frame sent\nI0125 10:14:20.913976    2227 log.go:172] (0xc000780790) Data frame received for 3\nI0125 10:14:20.914066    2227 log.go:172] (0xc000689c20) (3) Data frame handling\nI0125 10:14:20.914094    2227 log.go:172] (0xc000689c20) (3) Data frame sent\nI0125 10:14:21.021476    2227 log.go:172] (0xc000780790) Data frame received for 1\nI0125 10:14:21.021521    2227 log.go:172] (0xc00079e320) (1) Data frame handling\nI0125 10:14:21.021534    2227 log.go:172] (0xc00079e320) (1) Data frame sent\nI0125 10:14:21.021542    2227 log.go:172] (0xc000780790) (0xc00079e320) Stream removed, broadcasting: 1\nI0125 10:14:21.021935    2227 log.go:172] (0xc000780790) (0xc000689c20) Stream removed, broadcasting: 3\nI0125 10:14:21.022030    2227 log.go:172] (0xc000780790) (0xc00079e3c0) Stream removed, broadcasting: 5\nI0125 10:14:21.022053    2227 log.go:172] (0xc000780790) (0xc00079e320) Stream removed, broadcasting: 1\nI0125 10:14:21.022060    2227 log.go:172] (0xc000780790) (0xc000689c20) Stream removed, broadcasting: 3\nI0125 10:14:21.022066    2227 log.go:172] (0xc000780790) (0xc00079e3c0) Stream removed, broadcasting: 5\nI0125 10:14:21.022164    2227 log.go:172] (0xc000780790) Go away received\n"
Jan 25 10:14:21.029: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2457.svc.cluster.local\tcanonical name = externalsvc.services-2457.svc.cluster.local.\nName:\texternalsvc.services-2457.svc.cluster.local\nAddress: 10.96.61.88\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-2457, will wait for the garbage collector to delete the pods
Jan 25 10:14:21.105: INFO: Deleting ReplicationController externalsvc took: 19.459695ms
Jan 25 10:14:21.507: INFO: Terminating ReplicationController externalsvc pods took: 401.281736ms
Jan 25 10:14:33.164: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:14:33.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2457" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:33.065 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":279,"completed":109,"skipped":2298,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:14:33.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:14:33.375: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:14:35.385: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:14:37.384: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:14:39.382: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:14:41.384: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:14:43.383: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Running (Ready = false)
Jan 25 10:14:45.383: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Running (Ready = false)
Jan 25 10:14:47.382: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Running (Ready = false)
Jan 25 10:14:49.384: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Running (Ready = false)
Jan 25 10:14:51.390: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Running (Ready = false)
Jan 25 10:14:53.384: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Running (Ready = false)
Jan 25 10:14:55.427: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Running (Ready = false)
Jan 25 10:14:57.538: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Running (Ready = false)
Jan 25 10:14:59.382: INFO: The status of Pod test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc is Running (Ready = true)
Jan 25 10:14:59.388: INFO: Container started at 2020-01-25 10:14:40 +0000 UTC, pod became ready at 2020-01-25 10:14:58 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:14:59.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2363" for this suite.

• [SLOW TEST:26.171 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":279,"completed":110,"skipped":2319,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:14:59.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 25 10:14:59.511: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 10:14:59.546: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 10:14:59.552: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 25 10:14:59.575: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 25 10:14:59.575: INFO: 	Container weave ready: true, restart count 1
Jan 25 10:14:59.575: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 10:14:59.575: INFO: test-webserver-f3351d4f-4886-469a-af9b-9016526e0dfc from container-probe-2363 started at 2020-01-25 10:14:33 +0000 UTC (1 container statuses recorded)
Jan 25 10:14:59.575: INFO: 	Container test-webserver ready: true, restart count 0
Jan 25 10:14:59.575: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 25 10:14:59.575: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 10:14:59.575: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 25 10:14:59.598: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 25 10:14:59.598: INFO: 	Container coredns ready: true, restart count 0
Jan 25 10:14:59.598: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 25 10:14:59.598: INFO: 	Container coredns ready: true, restart count 0
Jan 25 10:14:59.598: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 25 10:14:59.598: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 25 10:14:59.598: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 25 10:14:59.598: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 10:14:59.598: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 25 10:14:59.598: INFO: 	Container weave ready: true, restart count 0
Jan 25 10:14:59.598: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 10:14:59.598: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 25 10:14:59.598: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 25 10:14:59.598: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 25 10:14:59.598: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 25 10:14:59.598: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 25 10:14:59.598: INFO: 	Container etcd ready: true, restart count 1
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-edf22b68-bd4a-4526-8070-db9c40bf4004 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-edf22b68-bd4a-4526-8070-db9c40bf4004 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-edf22b68-bd4a-4526-8070-db9c40bf4004
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:15:31.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2958" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:32.610 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":279,"completed":111,"skipped":2353,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:15:32.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 25 10:15:32.153: INFO: Created pod &Pod{ObjectMeta:{dns-5358  dns-5358 /api/v1/namespaces/dns-5358/pods/dns-5358 bb5d4171-c471-4c3d-80c9-6dc0b7ecb360 4221329 0 2020-01-25 10:15:32 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5g7xh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5g7xh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5g7xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:15:32.224: INFO: The status of Pod dns-5358 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:15:34.230: INFO: The status of Pod dns-5358 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:15:36.234: INFO: The status of Pod dns-5358 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:15:38.345: INFO: The status of Pod dns-5358 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:15:40.231: INFO: The status of Pod dns-5358 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:15:42.237: INFO: The status of Pod dns-5358 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 10:15:44.231: INFO: The status of Pod dns-5358 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 25 10:15:44.231: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5358 PodName:dns-5358 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 10:15:44.231: INFO: >>> kubeConfig: /root/.kube/config
I0125 10:15:44.297957       9 log.go:172] (0xc002be44d0) (0xc001ecf720) Create stream
I0125 10:15:44.298060       9 log.go:172] (0xc002be44d0) (0xc001ecf720) Stream added, broadcasting: 1
I0125 10:15:44.303951       9 log.go:172] (0xc002be44d0) Reply frame received for 1
I0125 10:15:44.304042       9 log.go:172] (0xc002be44d0) (0xc0019ae3c0) Create stream
I0125 10:15:44.304064       9 log.go:172] (0xc002be44d0) (0xc0019ae3c0) Stream added, broadcasting: 3
I0125 10:15:44.305488       9 log.go:172] (0xc002be44d0) Reply frame received for 3
I0125 10:15:44.305523       9 log.go:172] (0xc002be44d0) (0xc0022b0f00) Create stream
I0125 10:15:44.305538       9 log.go:172] (0xc002be44d0) (0xc0022b0f00) Stream added, broadcasting: 5
I0125 10:15:44.307287       9 log.go:172] (0xc002be44d0) Reply frame received for 5
I0125 10:15:44.417510       9 log.go:172] (0xc002be44d0) Data frame received for 3
I0125 10:15:44.417744       9 log.go:172] (0xc0019ae3c0) (3) Data frame handling
I0125 10:15:44.417827       9 log.go:172] (0xc0019ae3c0) (3) Data frame sent
I0125 10:15:44.516793       9 log.go:172] (0xc002be44d0) (0xc0019ae3c0) Stream removed, broadcasting: 3
I0125 10:15:44.517333       9 log.go:172] (0xc002be44d0) Data frame received for 1
I0125 10:15:44.517458       9 log.go:172] (0xc001ecf720) (1) Data frame handling
I0125 10:15:44.517639       9 log.go:172] (0xc001ecf720) (1) Data frame sent
I0125 10:15:44.517726       9 log.go:172] (0xc002be44d0) (0xc0022b0f00) Stream removed, broadcasting: 5
I0125 10:15:44.517884       9 log.go:172] (0xc002be44d0) (0xc001ecf720) Stream removed, broadcasting: 1
I0125 10:15:44.518011       9 log.go:172] (0xc002be44d0) Go away received
I0125 10:15:44.518755       9 log.go:172] (0xc002be44d0) (0xc001ecf720) Stream removed, broadcasting: 1
I0125 10:15:44.518798       9 log.go:172] (0xc002be44d0) (0xc0019ae3c0) Stream removed, broadcasting: 3
I0125 10:15:44.518838       9 log.go:172] (0xc002be44d0) (0xc0022b0f00) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan 25 10:15:44.519: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5358 PodName:dns-5358 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 10:15:44.519: INFO: >>> kubeConfig: /root/.kube/config
I0125 10:15:44.592122       9 log.go:172] (0xc001691b80) (0xc0019ae8c0) Create stream
I0125 10:15:44.592461       9 log.go:172] (0xc001691b80) (0xc0019ae8c0) Stream added, broadcasting: 1
I0125 10:15:44.601706       9 log.go:172] (0xc001691b80) Reply frame received for 1
I0125 10:15:44.601921       9 log.go:172] (0xc001691b80) (0xc001ecf860) Create stream
I0125 10:15:44.601949       9 log.go:172] (0xc001691b80) (0xc001ecf860) Stream added, broadcasting: 3
I0125 10:15:44.603912       9 log.go:172] (0xc001691b80) Reply frame received for 3
I0125 10:15:44.603991       9 log.go:172] (0xc001691b80) (0xc002c3c1e0) Create stream
I0125 10:15:44.604023       9 log.go:172] (0xc001691b80) (0xc002c3c1e0) Stream added, broadcasting: 5
I0125 10:15:44.606366       9 log.go:172] (0xc001691b80) Reply frame received for 5
I0125 10:15:44.690563       9 log.go:172] (0xc001691b80) Data frame received for 3
I0125 10:15:44.690649       9 log.go:172] (0xc001ecf860) (3) Data frame handling
I0125 10:15:44.690677       9 log.go:172] (0xc001ecf860) (3) Data frame sent
I0125 10:15:44.753220       9 log.go:172] (0xc001691b80) Data frame received for 1
I0125 10:15:44.753325       9 log.go:172] (0xc0019ae8c0) (1) Data frame handling
I0125 10:15:44.753363       9 log.go:172] (0xc0019ae8c0) (1) Data frame sent
I0125 10:15:44.753440       9 log.go:172] (0xc001691b80) (0xc0019ae8c0) Stream removed, broadcasting: 1
I0125 10:15:44.753843       9 log.go:172] (0xc001691b80) (0xc001ecf860) Stream removed, broadcasting: 3
I0125 10:15:44.754416       9 log.go:172] (0xc001691b80) (0xc002c3c1e0) Stream removed, broadcasting: 5
I0125 10:15:44.754614       9 log.go:172] (0xc001691b80) (0xc0019ae8c0) Stream removed, broadcasting: 1
I0125 10:15:44.754704       9 log.go:172] (0xc001691b80) (0xc001ecf860) Stream removed, broadcasting: 3
I0125 10:15:44.754809       9 log.go:172] (0xc001691b80) (0xc002c3c1e0) Stream removed, broadcasting: 5
I0125 10:15:44.755479       9 log.go:172] (0xc001691b80) Go away received
Jan 25 10:15:44.755: INFO: Deleting pod dns-5358...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:15:44.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5358" for this suite.

• [SLOW TEST:12.784 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":279,"completed":112,"skipped":2356,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:15:44.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 25 10:15:44.914: INFO: Waiting up to 5m0s for pod "downward-api-e236e828-f02f-4078-93f1-5532f0b45924" in namespace "downward-api-6744" to be "success or failure"
Jan 25 10:15:44.936: INFO: Pod "downward-api-e236e828-f02f-4078-93f1-5532f0b45924": Phase="Pending", Reason="", readiness=false. Elapsed: 22.023894ms
Jan 25 10:15:46.950: INFO: Pod "downward-api-e236e828-f02f-4078-93f1-5532f0b45924": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036085194s
Jan 25 10:15:48.959: INFO: Pod "downward-api-e236e828-f02f-4078-93f1-5532f0b45924": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044864677s
Jan 25 10:15:50.978: INFO: Pod "downward-api-e236e828-f02f-4078-93f1-5532f0b45924": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06400078s
Jan 25 10:15:52.990: INFO: Pod "downward-api-e236e828-f02f-4078-93f1-5532f0b45924": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075722802s
Jan 25 10:15:55.001: INFO: Pod "downward-api-e236e828-f02f-4078-93f1-5532f0b45924": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087330883s
STEP: Saw pod success
Jan 25 10:15:55.001: INFO: Pod "downward-api-e236e828-f02f-4078-93f1-5532f0b45924" satisfied condition "success or failure"
Jan 25 10:15:55.005: INFO: Trying to get logs from node jerma-node pod downward-api-e236e828-f02f-4078-93f1-5532f0b45924 container dapi-container: 
STEP: delete the pod
Jan 25 10:15:55.120: INFO: Waiting for pod downward-api-e236e828-f02f-4078-93f1-5532f0b45924 to disappear
Jan 25 10:15:55.183: INFO: Pod downward-api-e236e828-f02f-4078-93f1-5532f0b45924 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:15:55.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6744" for this suite.

• [SLOW TEST:10.397 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":279,"completed":113,"skipped":2387,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:15:55.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-16494523-9c84-434f-81ae-a49aa3cfdde8
STEP: Creating a pod to test consume configMaps
Jan 25 10:15:55.535: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9278356-abb0-4fb1-8f3e-69a843952a91" in namespace "projected-9705" to be "success or failure"
Jan 25 10:15:55.549: INFO: Pod "pod-projected-configmaps-e9278356-abb0-4fb1-8f3e-69a843952a91": Phase="Pending", Reason="", readiness=false. Elapsed: 14.17565ms
Jan 25 10:15:57.557: INFO: Pod "pod-projected-configmaps-e9278356-abb0-4fb1-8f3e-69a843952a91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021917524s
Jan 25 10:15:59.568: INFO: Pod "pod-projected-configmaps-e9278356-abb0-4fb1-8f3e-69a843952a91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032957017s
Jan 25 10:16:01.575: INFO: Pod "pod-projected-configmaps-e9278356-abb0-4fb1-8f3e-69a843952a91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039942033s
Jan 25 10:16:03.582: INFO: Pod "pod-projected-configmaps-e9278356-abb0-4fb1-8f3e-69a843952a91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046736659s
STEP: Saw pod success
Jan 25 10:16:03.582: INFO: Pod "pod-projected-configmaps-e9278356-abb0-4fb1-8f3e-69a843952a91" satisfied condition "success or failure"
Jan 25 10:16:03.585: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-e9278356-abb0-4fb1-8f3e-69a843952a91 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 10:16:03.656: INFO: Waiting for pod pod-projected-configmaps-e9278356-abb0-4fb1-8f3e-69a843952a91 to disappear
Jan 25 10:16:03.722: INFO: Pod pod-projected-configmaps-e9278356-abb0-4fb1-8f3e-69a843952a91 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:16:03.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9705" for this suite.

• [SLOW TEST:8.544 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":279,"completed":114,"skipped":2431,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:16:03.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-dec6e128-2214-40e6-ba7c-0cadbc548f1b
STEP: Creating a pod to test consume configMaps
Jan 25 10:16:03.996: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca" in namespace "projected-1564" to be "success or failure"
Jan 25 10:16:04.025: INFO: Pod "pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca": Phase="Pending", Reason="", readiness=false. Elapsed: 29.097097ms
Jan 25 10:16:06.036: INFO: Pod "pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040277602s
Jan 25 10:16:08.043: INFO: Pod "pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046905979s
Jan 25 10:16:10.048: INFO: Pod "pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052364342s
Jan 25 10:16:12.056: INFO: Pod "pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05965582s
Jan 25 10:16:14.061: INFO: Pod "pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06511515s
STEP: Saw pod success
Jan 25 10:16:14.061: INFO: Pod "pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca" satisfied condition "success or failure"
Jan 25 10:16:14.064: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 10:16:14.099: INFO: Waiting for pod pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca to disappear
Jan 25 10:16:14.114: INFO: Pod pod-projected-configmaps-c07030d5-29b4-4146-bec6-6309ec7d7aca no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:16:14.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1564" for this suite.

• [SLOW TEST:10.440 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":115,"skipped":2431,"failed":0}
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:16:14.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-9df07bb0-77ff-4918-970c-e03032b01045
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:16:24.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-514" for this suite.

• [SLOW TEST:10.144 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":279,"completed":116,"skipped":2431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:16:24.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 25 10:16:32.482: INFO: &Pod{ObjectMeta:{send-events-6db81c71-97f8-45cf-8fd1-b95e09646140  events-4192 /api/v1/namespaces/events-4192/pods/send-events-6db81c71-97f8-45cf-8fd1-b95e09646140 acd2c9fe-0320-4eaa-8145-527bac090e0b 4221642 0 2020-01-25 10:16:24 +0000 UTC   map[name:foo time:439867106] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gxbn8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gxbn8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gxbn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:16:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:16:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:16:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:16:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-25 10:16:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 10:16:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://9f8da11131695e91056a249c63bccca7bc1e06754278a26d741e494f61b94eb3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 25 10:16:34.493: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 25 10:16:36.502: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:16:36.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4192" for this suite.

• [SLOW TEST:12.343 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":279,"completed":117,"skipped":2465,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:16:36.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-b47069ab-585d-469a-9995-fecefadf15c0
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:16:36.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7081" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":279,"completed":118,"skipped":2482,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:16:36.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 10:16:37.560: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 10:16:39.576: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:16:41.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:16:43.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544197, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 10:16:46.628: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:16:57.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-916" for this suite.
STEP: Destroying namespace "webhook-916-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:20.308 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":279,"completed":119,"skipped":2498,"failed":0}
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:16:57.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 25 10:17:07.795: INFO: Successfully updated pod "labelsupdated254e234-d1e8-4507-ad56-82f194bbf97d"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:17:11.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5470" for this suite.

• [SLOW TEST:14.760 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":279,"completed":120,"skipped":2498,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:17:11.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 25 10:17:13.472: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 25 10:17:15.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:17:17.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:17:19.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:17:21.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:17:23.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544233, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 10:17:26.554: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:17:26.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Creating a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:17:27.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7328" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:16.074 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":279,"completed":121,"skipped":2506,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:17:27.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 10:17:28.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7096'
Jan 25 10:17:28.196: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 10:17:28.196: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan 25 10:17:28.223: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-gn7vj]
Jan 25 10:17:28.223: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-gn7vj" in namespace "kubectl-7096" to be "running and ready"
Jan 25 10:17:28.267: INFO: Pod "e2e-test-httpd-rc-gn7vj": Phase="Pending", Reason="", readiness=false. Elapsed: 44.05697ms
Jan 25 10:17:30.276: INFO: Pod "e2e-test-httpd-rc-gn7vj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052400691s
Jan 25 10:17:32.283: INFO: Pod "e2e-test-httpd-rc-gn7vj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059949626s
Jan 25 10:17:34.291: INFO: Pod "e2e-test-httpd-rc-gn7vj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067364147s
Jan 25 10:17:36.299: INFO: Pod "e2e-test-httpd-rc-gn7vj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075570127s
Jan 25 10:17:38.314: INFO: Pod "e2e-test-httpd-rc-gn7vj": Phase="Running", Reason="", readiness=true. Elapsed: 10.09034202s
Jan 25 10:17:38.314: INFO: Pod "e2e-test-httpd-rc-gn7vj" satisfied condition "running and ready"
Jan 25 10:17:38.314: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-gn7vj]
Jan 25 10:17:38.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7096'
Jan 25 10:17:38.592: INFO: stderr: ""
Jan 25 10:17:38.592: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Sat Jan 25 10:17:35.784745 2020] [mpm_event:notice] [pid 1:tid 140261633305448] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Jan 25 10:17:35.784830 2020] [core:notice] [pid 1:tid 140261633305448] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639
Jan 25 10:17:38.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7096'
Jan 25 10:17:38.694: INFO: stderr: ""
Jan 25 10:17:38.694: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:17:38.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7096" for this suite.

• [SLOW TEST:10.742 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1630
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":279,"completed":122,"skipped":2507,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:17:38.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:17:48.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-513" for this suite.

• [SLOW TEST:10.266 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":123,"skipped":2516,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:17:49.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 10:17:49.628: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 10:17:51.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:17:53.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:17:55.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:17:57.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715544269, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 10:18:00.666: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a webhook that the API server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:18:00.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-201" for this suite.
STEP: Destroying namespace "webhook-201-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.020 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":279,"completed":124,"skipped":2556,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:18:01.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 25 10:18:01.112: INFO: Waiting up to 5m0s for pod "pod-18e6359c-115f-4ef9-8127-728c9b2c6da4" in namespace "emptydir-1801" to be "success or failure"
Jan 25 10:18:01.134: INFO: Pod "pod-18e6359c-115f-4ef9-8127-728c9b2c6da4": Phase="Pending", Reason="", readiness=false. Elapsed: 21.148577ms
Jan 25 10:18:03.144: INFO: Pod "pod-18e6359c-115f-4ef9-8127-728c9b2c6da4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031165474s
Jan 25 10:18:05.167: INFO: Pod "pod-18e6359c-115f-4ef9-8127-728c9b2c6da4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054836176s
Jan 25 10:18:07.176: INFO: Pod "pod-18e6359c-115f-4ef9-8127-728c9b2c6da4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063846478s
Jan 25 10:18:09.206: INFO: Pod "pod-18e6359c-115f-4ef9-8127-728c9b2c6da4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093892955s
Jan 25 10:18:11.216: INFO: Pod "pod-18e6359c-115f-4ef9-8127-728c9b2c6da4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103304941s
STEP: Saw pod success
Jan 25 10:18:11.216: INFO: Pod "pod-18e6359c-115f-4ef9-8127-728c9b2c6da4" satisfied condition "success or failure"
Jan 25 10:18:11.222: INFO: Trying to get logs from node jerma-node pod pod-18e6359c-115f-4ef9-8127-728c9b2c6da4 container test-container: 
STEP: delete the pod
Jan 25 10:18:11.264: INFO: Waiting for pod pod-18e6359c-115f-4ef9-8127-728c9b2c6da4 to disappear
Jan 25 10:18:11.269: INFO: Pod pod-18e6359c-115f-4ef9-8127-728c9b2c6da4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:18:11.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1801" for this suite.

• [SLOW TEST:10.314 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":125,"skipped":2562,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:18:11.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 25 10:18:11.566: INFO: Waiting up to 5m0s for pod "pod-03585342-2daf-4098-b5d2-5fe9604743c5" in namespace "emptydir-2872" to be "success or failure"
Jan 25 10:18:11.577: INFO: Pod "pod-03585342-2daf-4098-b5d2-5fe9604743c5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.622525ms
Jan 25 10:18:13.594: INFO: Pod "pod-03585342-2daf-4098-b5d2-5fe9604743c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02849686s
Jan 25 10:18:15.604: INFO: Pod "pod-03585342-2daf-4098-b5d2-5fe9604743c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038594937s
Jan 25 10:18:17.613: INFO: Pod "pod-03585342-2daf-4098-b5d2-5fe9604743c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046783337s
Jan 25 10:18:19.626: INFO: Pod "pod-03585342-2daf-4098-b5d2-5fe9604743c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06050469s
STEP: Saw pod success
Jan 25 10:18:19.627: INFO: Pod "pod-03585342-2daf-4098-b5d2-5fe9604743c5" satisfied condition "success or failure"
Jan 25 10:18:19.633: INFO: Trying to get logs from node jerma-node pod pod-03585342-2daf-4098-b5d2-5fe9604743c5 container test-container: 
STEP: delete the pod
Jan 25 10:18:19.704: INFO: Waiting for pod pod-03585342-2daf-4098-b5d2-5fe9604743c5 to disappear
Jan 25 10:18:19.715: INFO: Pod pod-03585342-2daf-4098-b5d2-5fe9604743c5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:18:19.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2872" for this suite.

• [SLOW TEST:8.400 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":126,"skipped":2562,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:18:19.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-0c0fbef0-a705-4777-8103-b4c76eb8bd45 in namespace container-probe-9332
Jan 25 10:18:32.001: INFO: Started pod liveness-0c0fbef0-a705-4777-8103-b4c76eb8bd45 in namespace container-probe-9332
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 10:18:32.004: INFO: Initial restart count of pod liveness-0c0fbef0-a705-4777-8103-b4c76eb8bd45 is 0
Jan 25 10:18:48.096: INFO: Restart count of pod container-probe-9332/liveness-0c0fbef0-a705-4777-8103-b4c76eb8bd45 is now 1 (16.091993422s elapsed)
Jan 25 10:19:08.192: INFO: Restart count of pod container-probe-9332/liveness-0c0fbef0-a705-4777-8103-b4c76eb8bd45 is now 2 (36.18822668s elapsed)
Jan 25 10:19:28.285: INFO: Restart count of pod container-probe-9332/liveness-0c0fbef0-a705-4777-8103-b4c76eb8bd45 is now 3 (56.280928165s elapsed)
Jan 25 10:19:46.379: INFO: Restart count of pod container-probe-9332/liveness-0c0fbef0-a705-4777-8103-b4c76eb8bd45 is now 4 (1m14.375081875s elapsed)
Jan 25 10:20:58.833: INFO: Restart count of pod container-probe-9332/liveness-0c0fbef0-a705-4777-8103-b4c76eb8bd45 is now 5 (2m26.828849128s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:20:58.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9332" for this suite.

• [SLOW TEST:159.195 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":279,"completed":127,"skipped":2572,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:20:58.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Jan 25 10:20:59.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:21:21.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6471" for this suite.

• [SLOW TEST:22.462 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":279,"completed":128,"skipped":2592,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:21:21.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 25 10:21:21.477: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 10:21:21.488: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 10:21:21.491: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 25 10:21:21.512: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 25 10:21:21.512: INFO: 	Container weave ready: true, restart count 1
Jan 25 10:21:21.512: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 10:21:21.512: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:21.512: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 10:21:21.512: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 25 10:21:21.532: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:21.532: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 25 10:21:21.532: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:21.532: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 25 10:21:21.532: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:21.532: INFO: 	Container etcd ready: true, restart count 1
Jan 25 10:21:21.532: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:21.532: INFO: 	Container coredns ready: true, restart count 0
Jan 25 10:21:21.532: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:21.532: INFO: 	Container coredns ready: true, restart count 0
Jan 25 10:21:21.532: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:21.532: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 25 10:21:21.532: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:21.532: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 10:21:21.532: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 25 10:21:21.532: INFO: 	Container weave ready: true, restart count 0
Jan 25 10:21:21.532: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Jan 25 10:21:21.682: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 25 10:21:21.682: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 25 10:21:21.682: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 25 10:21:21.682: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Jan 25 10:21:21.682: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Jan 25 10:21:21.682: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 25 10:21:21.682: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Jan 25 10:21:21.683: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 25 10:21:21.683: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Jan 25 10:21:21.683: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
STEP: Starting Pods to consume most of the cluster CPU.
Jan 25 10:21:21.683: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Jan 25 10:21:21.702: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-adf72061-7330-42a0-a0e0-596875f1d588.15ed1a0baf682ae0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4279/filler-pod-adf72061-7330-42a0-a0e0-596875f1d588 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-adf72061-7330-42a0-a0e0-596875f1d588.15ed1a0c892a41f5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-adf72061-7330-42a0-a0e0-596875f1d588.15ed1a0d61a03e5e], Reason = [Created], Message = [Created container filler-pod-adf72061-7330-42a0-a0e0-596875f1d588]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-adf72061-7330-42a0-a0e0-596875f1d588.15ed1a0d7b332531], Reason = [Started], Message = [Started container filler-pod-adf72061-7330-42a0-a0e0-596875f1d588]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ae085598-46d7-4a41-a733-83099363e782.15ed1a0bb332de53], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4279/filler-pod-ae085598-46d7-4a41-a733-83099363e782 to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ae085598-46d7-4a41-a733-83099363e782.15ed1a0ca69fd993], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ae085598-46d7-4a41-a733-83099363e782.15ed1a0d75b2aa71], Reason = [Created], Message = [Created container filler-pod-ae085598-46d7-4a41-a733-83099363e782]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ae085598-46d7-4a41-a733-83099363e782.15ed1a0d9439d90f], Reason = [Started], Message = [Started container filler-pod-ae085598-46d7-4a41-a733-83099363e782]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ed1a0e0b4a683a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ed1a0e13ca2bfb], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:21:33.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4279" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:11.706 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":279,"completed":129,"skipped":2592,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:21:33.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 25 10:21:33.218: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 10:21:33.236: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 10:21:33.240: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 25 10:21:33.254: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:33.254: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 10:21:33.254: INFO: filler-pod-adf72061-7330-42a0-a0e0-596875f1d588 from sched-pred-4279 started at 2020-01-25 10:21:21 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:33.254: INFO: 	Container filler-pod-adf72061-7330-42a0-a0e0-596875f1d588 ready: true, restart count 0
Jan 25 10:21:33.254: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 25 10:21:33.254: INFO: 	Container weave ready: true, restart count 1
Jan 25 10:21:33.254: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 10:21:33.254: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 25 10:21:33.265: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:33.265: INFO: 	Container coredns ready: true, restart count 0
Jan 25 10:21:33.265: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:33.265: INFO: 	Container coredns ready: true, restart count 0
Jan 25 10:21:33.265: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:33.265: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 25 10:21:33.265: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:33.265: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 10:21:33.265: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 25 10:21:33.265: INFO: 	Container weave ready: true, restart count 0
Jan 25 10:21:33.265: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 10:21:33.265: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:33.265: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 25 10:21:33.265: INFO: filler-pod-ae085598-46d7-4a41-a733-83099363e782 from sched-pred-4279 started at 2020-01-25 10:21:22 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:33.265: INFO: 	Container filler-pod-ae085598-46d7-4a41-a733-83099363e782 ready: true, restart count 0
Jan 25 10:21:33.265: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:33.265: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 25 10:21:33.265: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 25 10:21:33.265: INFO: 	Container etcd ready: true, restart count 1
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-df1786cf-b573-417b-b8a8-0d272be10713 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-df1786cf-b573-417b-b8a8-0d272be10713 off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label kubernetes.io/e2e-df1786cf-b573-417b-b8a8-0d272be10713
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:26:53.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6767" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:320.685 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":279,"completed":130,"skipped":2598,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:26:53.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:26:53.909: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:26:54.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1619" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":279,"completed":131,"skipped":2627,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:26:55.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:26:55.146: INFO: Creating ReplicaSet my-hostname-basic-9fa87d95-4dae-4414-b336-373c5c9f87cc
Jan 25 10:26:55.208: INFO: Pod name my-hostname-basic-9fa87d95-4dae-4414-b336-373c5c9f87cc: Found 0 pods out of 1
Jan 25 10:27:00.536: INFO: Pod name my-hostname-basic-9fa87d95-4dae-4414-b336-373c5c9f87cc: Found 1 pods out of 1
Jan 25 10:27:00.536: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9fa87d95-4dae-4414-b336-373c5c9f87cc" is running
Jan 25 10:27:02.571: INFO: Pod "my-hostname-basic-9fa87d95-4dae-4414-b336-373c5c9f87cc-sfs5p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 10:26:55 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 10:26:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9fa87d95-4dae-4414-b336-373c5c9f87cc]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 10:26:55 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9fa87d95-4dae-4414-b336-373c5c9f87cc]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 10:26:55 +0000 UTC Reason: Message:}])
Jan 25 10:27:02.572: INFO: Trying to dial the pod
Jan 25 10:27:07.653: INFO: Controller my-hostname-basic-9fa87d95-4dae-4414-b336-373c5c9f87cc: Got expected result from replica 1 [my-hostname-basic-9fa87d95-4dae-4414-b336-373c5c9f87cc-sfs5p]: "my-hostname-basic-9fa87d95-4dae-4414-b336-373c5c9f87cc-sfs5p", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:27:07.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8629" for this suite.

• [SLOW TEST:12.639 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":279,"completed":132,"skipped":2653,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:27:07.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:27:07.805: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-9b454293-5341-4f82-9144-232da4b93421" in namespace "security-context-test-8734" to be "success or failure"
Jan 25 10:27:07.850: INFO: Pod "busybox-readonly-false-9b454293-5341-4f82-9144-232da4b93421": Phase="Pending", Reason="", readiness=false. Elapsed: 44.405854ms
Jan 25 10:27:09.862: INFO: Pod "busybox-readonly-false-9b454293-5341-4f82-9144-232da4b93421": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055958924s
Jan 25 10:27:11.870: INFO: Pod "busybox-readonly-false-9b454293-5341-4f82-9144-232da4b93421": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064363553s
Jan 25 10:27:13.880: INFO: Pod "busybox-readonly-false-9b454293-5341-4f82-9144-232da4b93421": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073914385s
Jan 25 10:27:15.892: INFO: Pod "busybox-readonly-false-9b454293-5341-4f82-9144-232da4b93421": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086349815s
Jan 25 10:27:17.904: INFO: Pod "busybox-readonly-false-9b454293-5341-4f82-9144-232da4b93421": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098629594s
Jan 25 10:27:17.905: INFO: Pod "busybox-readonly-false-9b454293-5341-4f82-9144-232da4b93421" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:27:17.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8734" for this suite.

• [SLOW TEST:10.235 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":279,"completed":133,"skipped":2664,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:27:17.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 25 10:27:28.689: INFO: Successfully updated pod "annotationupdateb38b3faa-bbed-49ab-a7d2-77afbff223c9"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:27:30.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6267" for this suite.

• [SLOW TEST:12.832 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":279,"completed":134,"skipped":2676,"failed":0}
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:27:30.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-3969
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3969 to expose endpoints map[]
Jan 25 10:27:30.887: INFO: Get endpoints failed (11.173394ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 25 10:27:31.900: INFO: successfully validated that service endpoint-test2 in namespace services-3969 exposes endpoints map[] (1.023894736s elapsed)
STEP: Creating pod pod1 in namespace services-3969
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3969 to expose endpoints map[pod1:[80]]
Jan 25 10:27:36.114: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.190439492s elapsed, will retry)
Jan 25 10:27:41.184: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.26056608s elapsed, will retry)
Jan 25 10:27:42.193: INFO: successfully validated that service endpoint-test2 in namespace services-3969 exposes endpoints map[pod1:[80]] (10.269599941s elapsed)
STEP: Creating pod pod2 in namespace services-3969
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3969 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 25 10:27:46.831: INFO: Unexpected endpoints: found map[87a2554b-ba35-4e28-9aa2-0b8db212860c:[80]], expected map[pod1:[80] pod2:[80]] (4.630195143s elapsed, will retry)
Jan 25 10:27:49.947: INFO: successfully validated that service endpoint-test2 in namespace services-3969 exposes endpoints map[pod1:[80] pod2:[80]] (7.746564125s elapsed)
STEP: Deleting pod pod1 in namespace services-3969
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3969 to expose endpoints map[pod2:[80]]
Jan 25 10:27:49.985: INFO: successfully validated that service endpoint-test2 in namespace services-3969 exposes endpoints map[pod2:[80]] (27.368085ms elapsed)
STEP: Deleting pod pod2 in namespace services-3969
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3969 to expose endpoints map[]
Jan 25 10:27:50.139: INFO: successfully validated that service endpoint-test2 in namespace services-3969 exposes endpoints map[] (126.728674ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:27:50.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3969" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:19.597 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":279,"completed":135,"skipped":2677,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:27:50.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 10:27:50.479: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871" in namespace "projected-5917" to be "success or failure"
Jan 25 10:27:50.501: INFO: Pod "downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871": Phase="Pending", Reason="", readiness=false. Elapsed: 21.531679ms
Jan 25 10:27:52.803: INFO: Pod "downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324275892s
Jan 25 10:27:54.812: INFO: Pod "downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332637169s
Jan 25 10:27:56.820: INFO: Pod "downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871": Phase="Pending", Reason="", readiness=false. Elapsed: 6.341285935s
Jan 25 10:27:58.833: INFO: Pod "downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871": Phase="Pending", Reason="", readiness=false. Elapsed: 8.353741842s
Jan 25 10:28:00.842: INFO: Pod "downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871": Phase="Pending", Reason="", readiness=false. Elapsed: 10.362590467s
Jan 25 10:28:02.861: INFO: Pod "downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.381952184s
STEP: Saw pod success
Jan 25 10:28:02.861: INFO: Pod "downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871" satisfied condition "success or failure"
Jan 25 10:28:02.870: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871 container client-container: 
STEP: delete the pod
Jan 25 10:28:02.933: INFO: Waiting for pod downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871 to disappear
Jan 25 10:28:02.947: INFO: Pod downwardapi-volume-8a9c372b-0177-4019-872d-9717dcbd1871 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:28:02.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5917" for this suite.

• [SLOW TEST:12.612 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":279,"completed":136,"skipped":2685,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:28:02.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 25 10:28:03.217: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6570 /api/v1/namespaces/watch-6570/configmaps/e2e-watch-test-resource-version e3cec7f0-ef57-41cc-b404-0355bbb8d255 4223929 0 2020-01-25 10:28:03 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 25 10:28:03.217: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-6570 /api/v1/namespaces/watch-6570/configmaps/e2e-watch-test-resource-version e3cec7f0-ef57-41cc-b404-0355bbb8d255 4223930 0 2020-01-25 10:28:03 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:28:03.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6570" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":279,"completed":137,"skipped":2698,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:28:03.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 25 10:28:03.351: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 10:28:07.396: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:28:20.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5720" for this suite.

• [SLOW TEST:17.616 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":279,"completed":138,"skipped":2719,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:28:20.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 25 10:28:20.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 25 10:28:34.723: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 10:28:38.161: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:28:53.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-417" for this suite.

• [SLOW TEST:32.446 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":279,"completed":139,"skipped":2732,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:28:53.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5911, will wait for the garbage collector to delete the pods
Jan 25 10:29:03.520: INFO: Deleting Job.batch foo took: 10.9222ms
Jan 25 10:29:03.921: INFO: Terminating Job.batch foo pods took: 401.11686ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:29:52.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5911" for this suite.

• [SLOW TEST:59.153 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":279,"completed":140,"skipped":2756,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:29:52.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 25 10:30:01.170: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3699 pod-service-account-165e2101-72af-45f3-a862-a4fc168a2d54 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 25 10:30:03.530: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3699 pod-service-account-165e2101-72af-45f3-a862-a4fc168a2d54 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 25 10:30:03.857: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3699 pod-service-account-165e2101-72af-45f3-a862-a4fc168a2d54 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:30:04.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3699" for this suite.

• [SLOW TEST:11.808 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":279,"completed":141,"skipped":2835,"failed":0}
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:30:04.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 10:30:11.571: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:30:11.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3835" for this suite.

• [SLOW TEST:7.392 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":279,"completed":142,"skipped":2836,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:30:11.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-aff653af-6a27-4e13-a794-521805761d51
STEP: Creating a pod to test consume configMaps
Jan 25 10:30:11.901: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9" in namespace "projected-8657" to be "success or failure"
Jan 25 10:30:11.935: INFO: Pod "pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.156166ms
Jan 25 10:30:13.945: INFO: Pod "pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043977569s
Jan 25 10:30:15.957: INFO: Pod "pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055834759s
Jan 25 10:30:17.969: INFO: Pod "pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068098361s
Jan 25 10:30:19.987: INFO: Pod "pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08614602s
Jan 25 10:30:21.995: INFO: Pod "pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093626202s
Jan 25 10:30:24.001: INFO: Pod "pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.100094677s
STEP: Saw pod success
Jan 25 10:30:24.002: INFO: Pod "pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9" satisfied condition "success or failure"
Jan 25 10:30:24.005: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 10:30:24.146: INFO: Waiting for pod pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9 to disappear
Jan 25 10:30:24.156: INFO: Pod pod-projected-configmaps-497aae54-aea0-429b-ab5b-ebc26f65dbf9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:30:24.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8657" for this suite.

• [SLOW TEST:12.514 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":143,"skipped":2864,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:30:24.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:30:24.314: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 25 10:30:29.321: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 10:30:33.333: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 25 10:30:41.405: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-2555 /apis/apps/v1/namespaces/deployment-2555/deployments/test-cleanup-deployment 0dad122b-845b-4b66-aef0-661a5c1c226e 4224536 1 2020-01-25 10:30:33 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d9e998  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-25 10:30:33 +0000 UTC,LastTransitionTime:2020-01-25 10:30:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-01-25 10:30:39 +0000 UTC,LastTransitionTime:2020-01-25 10:30:33 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 25 10:30:41.411: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-2555 /apis/apps/v1/namespaces/deployment-2555/replicasets/test-cleanup-deployment-55ffc6b7b6 850c7a1b-e822-42c1-8dbd-93c502f7a6ec 4224524 1 2020-01-25 10:30:33 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 0dad122b-845b-4b66-aef0-661a5c1c226e 0xc003d9ee17 0xc003d9ee18}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003d9ee88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 25 10:30:41.415: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-g99q6" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-g99q6 test-cleanup-deployment-55ffc6b7b6- deployment-2555 /api/v1/namespaces/deployment-2555/pods/test-cleanup-deployment-55ffc6b7b6-g99q6 99f8efb5-f6be-4be7-a8e9-0a271b062f97 4224523 0 2020-01-25 10:30:33 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 850c7a1b-e822-42c1-8dbd-93c502f7a6ec 0xc003d62557 0xc003d62558}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9cx4s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9cx4s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9cx4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:30:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:30:39 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:30:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:30:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-25 10:30:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 10:30:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://fe8992252114e4fd2cea8053c1d0b405a6942481cd360682152438de5beb453e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:30:41.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2555" for this suite.

• [SLOW TEST:17.260 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":279,"completed":144,"skipped":2869,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:30:41.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 10:30:42.270: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 10:30:44.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:30:46.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:30:48.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:30:50.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:30:52.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545042, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 10:30:55.531: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:31:07.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2186" for this suite.
STEP: Destroying namespace "webhook-2186-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:26.557 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":279,"completed":145,"skipped":2936,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:31:07.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:31:18.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6188" for this suite.

• [SLOW TEST:10.122 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":279,"completed":146,"skipped":2961,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:31:18.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9206
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating stateful set ss in namespace statefulset-9206
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9206
Jan 25 10:31:18.252: INFO: Found 0 stateful pods, waiting for 1
Jan 25 10:31:28.265: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 25 10:31:28.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 10:31:28.704: INFO: stderr: "I0125 10:31:28.466149    2369 log.go:172] (0xc000104370) (0xc00083a000) Create stream\nI0125 10:31:28.466361    2369 log.go:172] (0xc000104370) (0xc00083a000) Stream added, broadcasting: 1\nI0125 10:31:28.476675    2369 log.go:172] (0xc000104370) Reply frame received for 1\nI0125 10:31:28.476726    2369 log.go:172] (0xc000104370) (0xc000543b80) Create stream\nI0125 10:31:28.476739    2369 log.go:172] (0xc000104370) (0xc000543b80) Stream added, broadcasting: 3\nI0125 10:31:28.478067    2369 log.go:172] (0xc000104370) Reply frame received for 3\nI0125 10:31:28.478102    2369 log.go:172] (0xc000104370) (0xc000314780) Create stream\nI0125 10:31:28.478124    2369 log.go:172] (0xc000104370) (0xc000314780) Stream added, broadcasting: 5\nI0125 10:31:28.479405    2369 log.go:172] (0xc000104370) Reply frame received for 5\nI0125 10:31:28.584181    2369 log.go:172] (0xc000104370) Data frame received for 5\nI0125 10:31:28.584283    2369 log.go:172] (0xc000314780) (5) Data frame handling\nI0125 10:31:28.584400    2369 log.go:172] (0xc000314780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 10:31:28.609667    2369 log.go:172] (0xc000104370) Data frame received for 3\nI0125 10:31:28.609724    2369 log.go:172] (0xc000543b80) (3) Data frame handling\nI0125 10:31:28.609739    2369 log.go:172] (0xc000543b80) (3) Data frame sent\nI0125 10:31:28.698043    2369 log.go:172] (0xc000104370) (0xc000543b80) Stream removed, broadcasting: 3\nI0125 10:31:28.698174    2369 log.go:172] (0xc000104370) Data frame received for 1\nI0125 10:31:28.698224    2369 log.go:172] (0xc000104370) (0xc000314780) Stream removed, broadcasting: 5\nI0125 10:31:28.698253    2369 log.go:172] (0xc00083a000) (1) Data frame handling\nI0125 10:31:28.698261    2369 log.go:172] (0xc00083a000) (1) Data frame sent\nI0125 10:31:28.698273    2369 log.go:172] (0xc000104370) (0xc00083a000) Stream removed, broadcasting: 1\nI0125 10:31:28.698281    2369 log.go:172] (0xc000104370) Go away received\nI0125 10:31:28.698858    2369 log.go:172] (0xc000104370) (0xc00083a000) Stream removed, broadcasting: 1\nI0125 10:31:28.698895    2369 log.go:172] (0xc000104370) (0xc000543b80) Stream removed, broadcasting: 3\nI0125 10:31:28.698906    2369 log.go:172] (0xc000104370) (0xc000314780) Stream removed, broadcasting: 5\n"
Jan 25 10:31:28.704: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 10:31:28.704: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
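The 'mv ... || true' exec above is how the suite deliberately makes a pod unhealthy: the webserver container is an httpd serving /usr/local/apache2/htdocs, so moving index.html out of the docroot fails its readiness check (presumably an HTTP GET against /index.html) without killing the container, while '|| true' keeps the exec's own exit code 0. A minimal sketch of the same trick by hand, assuming the kubeconfig and namespace shown in this run:

# Break readiness on ss-0 by hiding the probed file
kubectl --kubeconfig=/root/.kube/config exec -n statefulset-9206 ss-0 -- \
  /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
# Restore readiness later by moving it back
kubectl --kubeconfig=/root/.kube/config exec -n statefulset-9206 ss-0 -- \
  /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'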

Jan 25 10:31:28.741: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 10:31:28.741: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 10:31:28.780: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 10:31:28.780: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:18 +0000 UTC  }]
Jan 25 10:31:28.780: INFO: 
Jan 25 10:31:28.780: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 25 10:31:29.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.970648669s
Jan 25 10:31:31.049: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.820049502s
Jan 25 10:31:32.099: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.701807515s
Jan 25 10:31:34.229: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.652150944s
Jan 25 10:31:35.238: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.521954306s
Jan 25 10:31:36.251: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.51353609s
Jan 25 10:31:37.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.500281327s
Jan 25 10:31:38.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 488.492623ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9206
Jan 25 10:31:39.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:31:39.674: INFO: stderr: "I0125 10:31:39.499651    2387 log.go:172] (0xc000664b00) (0xc0006e7ea0) Create stream\nI0125 10:31:39.499811    2387 log.go:172] (0xc000664b00) (0xc0006e7ea0) Stream added, broadcasting: 1\nI0125 10:31:39.504000    2387 log.go:172] (0xc000664b00) Reply frame received for 1\nI0125 10:31:39.504038    2387 log.go:172] (0xc000664b00) (0xc00065c780) Create stream\nI0125 10:31:39.504048    2387 log.go:172] (0xc000664b00) (0xc00065c780) Stream added, broadcasting: 3\nI0125 10:31:39.505561    2387 log.go:172] (0xc000664b00) Reply frame received for 3\nI0125 10:31:39.505624    2387 log.go:172] (0xc000664b00) (0xc0006e7f40) Create stream\nI0125 10:31:39.505641    2387 log.go:172] (0xc000664b00) (0xc0006e7f40) Stream added, broadcasting: 5\nI0125 10:31:39.508601    2387 log.go:172] (0xc000664b00) Reply frame received for 5\nI0125 10:31:39.589337    2387 log.go:172] (0xc000664b00) Data frame received for 3\nI0125 10:31:39.589395    2387 log.go:172] (0xc00065c780) (3) Data frame handling\nI0125 10:31:39.589403    2387 log.go:172] (0xc00065c780) (3) Data frame sent\nI0125 10:31:39.589426    2387 log.go:172] (0xc000664b00) Data frame received for 5\nI0125 10:31:39.589433    2387 log.go:172] (0xc0006e7f40) (5) Data frame handling\nI0125 10:31:39.589444    2387 log.go:172] (0xc0006e7f40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 10:31:39.663540    2387 log.go:172] (0xc000664b00) Data frame received for 1\nI0125 10:31:39.663596    2387 log.go:172] (0xc000664b00) (0xc00065c780) Stream removed, broadcasting: 3\nI0125 10:31:39.663621    2387 log.go:172] (0xc0006e7ea0) (1) Data frame handling\nI0125 10:31:39.663634    2387 log.go:172] (0xc0006e7ea0) (1) Data frame sent\nI0125 10:31:39.663655    2387 log.go:172] (0xc000664b00) (0xc0006e7f40) Stream removed, broadcasting: 5\nI0125 10:31:39.663704    2387 log.go:172] (0xc000664b00) (0xc0006e7ea0) Stream removed, broadcasting: 1\nI0125 10:31:39.663717    2387 log.go:172] (0xc000664b00) Go away received\nI0125 10:31:39.664080    2387 log.go:172] (0xc000664b00) (0xc0006e7ea0) Stream removed, broadcasting: 1\nI0125 10:31:39.664096    2387 log.go:172] (0xc000664b00) (0xc00065c780) Stream removed, broadcasting: 3\nI0125 10:31:39.664103    2387 log.go:172] (0xc000664b00) (0xc0006e7f40) Stream removed, broadcasting: 5\n"
Jan 25 10:31:39.675: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 10:31:39.675: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 10:31:39.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:31:40.038: INFO: stderr: "I0125 10:31:39.835470    2410 log.go:172] (0xc0009afa20) (0xc000b54780) Create stream\nI0125 10:31:39.835547    2410 log.go:172] (0xc0009afa20) (0xc000b54780) Stream added, broadcasting: 1\nI0125 10:31:39.846389    2410 log.go:172] (0xc0009afa20) Reply frame received for 1\nI0125 10:31:39.846421    2410 log.go:172] (0xc0009afa20) (0xc0005d86e0) Create stream\nI0125 10:31:39.846429    2410 log.go:172] (0xc0009afa20) (0xc0005d86e0) Stream added, broadcasting: 3\nI0125 10:31:39.847904    2410 log.go:172] (0xc0009afa20) Reply frame received for 3\nI0125 10:31:39.847930    2410 log.go:172] (0xc0009afa20) (0xc000743360) Create stream\nI0125 10:31:39.847943    2410 log.go:172] (0xc0009afa20) (0xc000743360) Stream added, broadcasting: 5\nI0125 10:31:39.849034    2410 log.go:172] (0xc0009afa20) Reply frame received for 5\nI0125 10:31:39.923924    2410 log.go:172] (0xc0009afa20) Data frame received for 5\nI0125 10:31:39.923979    2410 log.go:172] (0xc000743360) (5) Data frame handling\nI0125 10:31:39.924004    2410 log.go:172] (0xc000743360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 10:31:39.924791    2410 log.go:172] (0xc0009afa20) Data frame received for 5\nI0125 10:31:39.924828    2410 log.go:172] (0xc000743360) (5) Data frame handling\nI0125 10:31:39.924850    2410 log.go:172] (0xc000743360) (5) Data frame sent\nI0125 10:31:39.924869    2410 log.go:172] (0xc0009afa20) Data frame received for 3\nI0125 10:31:39.924898    2410 log.go:172] (0xc0005d86e0) (3) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0125 10:31:39.924922    2410 log.go:172] (0xc0005d86e0) (3) Data frame sent\nI0125 10:31:39.925140    2410 log.go:172] (0xc0009afa20) Data frame received for 5\nI0125 10:31:39.925162    2410 log.go:172] (0xc000743360) (5) Data frame handling\nI0125 10:31:39.925177    2410 log.go:172] (0xc000743360) (5) Data frame sent\n+ true\nI0125 10:31:40.030721    2410 log.go:172] (0xc0009afa20) (0xc0005d86e0) Stream removed, broadcasting: 3\nI0125 10:31:40.030847    2410 log.go:172] (0xc0009afa20) Data frame received for 1\nI0125 10:31:40.030870    2410 log.go:172] (0xc0009afa20) (0xc000743360) Stream removed, broadcasting: 5\nI0125 10:31:40.030909    2410 log.go:172] (0xc000b54780) (1) Data frame handling\nI0125 10:31:40.030932    2410 log.go:172] (0xc000b54780) (1) Data frame sent\nI0125 10:31:40.030939    2410 log.go:172] (0xc0009afa20) (0xc000b54780) Stream removed, broadcasting: 1\nI0125 10:31:40.030953    2410 log.go:172] (0xc0009afa20) Go away received\nI0125 10:31:40.031344    2410 log.go:172] (0xc0009afa20) (0xc000b54780) Stream removed, broadcasting: 1\nI0125 10:31:40.031399    2410 log.go:172] (0xc0009afa20) (0xc0005d86e0) Stream removed, broadcasting: 3\nI0125 10:31:40.031411    2410 log.go:172] (0xc0009afa20) (0xc000743360) Stream removed, broadcasting: 5\n"
Jan 25 10:31:40.039: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 10:31:40.039: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 10:31:40.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:31:40.340: INFO: stderr: "I0125 10:31:40.179739    2429 log.go:172] (0xc000ba82c0) (0xc000634a00) Create stream\nI0125 10:31:40.179859    2429 log.go:172] (0xc000ba82c0) (0xc000634a00) Stream added, broadcasting: 1\nI0125 10:31:40.182661    2429 log.go:172] (0xc000ba82c0) Reply frame received for 1\nI0125 10:31:40.182693    2429 log.go:172] (0xc000ba82c0) (0xc000701e00) Create stream\nI0125 10:31:40.182700    2429 log.go:172] (0xc000ba82c0) (0xc000701e00) Stream added, broadcasting: 3\nI0125 10:31:40.183771    2429 log.go:172] (0xc000ba82c0) Reply frame received for 3\nI0125 10:31:40.183800    2429 log.go:172] (0xc000ba82c0) (0xc000c060a0) Create stream\nI0125 10:31:40.183815    2429 log.go:172] (0xc000ba82c0) (0xc000c060a0) Stream added, broadcasting: 5\nI0125 10:31:40.185802    2429 log.go:172] (0xc000ba82c0) Reply frame received for 5\nI0125 10:31:40.255219    2429 log.go:172] (0xc000ba82c0) Data frame received for 5\nI0125 10:31:40.255287    2429 log.go:172] (0xc000c060a0) (5) Data frame handling\nI0125 10:31:40.255302    2429 log.go:172] (0xc000c060a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0125 10:31:40.255318    2429 log.go:172] (0xc000ba82c0) Data frame received for 3\nI0125 10:31:40.255325    2429 log.go:172] (0xc000701e00) (3) Data frame handling\nI0125 10:31:40.255343    2429 log.go:172] (0xc000701e00) (3) Data frame sent\nI0125 10:31:40.330619    2429 log.go:172] (0xc000ba82c0) Data frame received for 1\nI0125 10:31:40.330720    2429 log.go:172] (0xc000ba82c0) (0xc000701e00) Stream removed, broadcasting: 3\nI0125 10:31:40.330795    2429 log.go:172] (0xc000ba82c0) (0xc000c060a0) Stream removed, broadcasting: 5\nI0125 10:31:40.331007    2429 log.go:172] (0xc000634a00) (1) Data frame handling\nI0125 10:31:40.331039    2429 log.go:172] (0xc000634a00) (1) Data frame sent\nI0125 10:31:40.331052    2429 log.go:172] (0xc000ba82c0) (0xc000634a00) Stream removed, broadcasting: 1\nI0125 10:31:40.331073    2429 log.go:172] (0xc000ba82c0) Go away received\nI0125 10:31:40.331691    2429 log.go:172] (0xc000ba82c0) (0xc000634a00) Stream removed, broadcasting: 1\nI0125 10:31:40.331720    2429 log.go:172] (0xc000ba82c0) (0xc000701e00) Stream removed, broadcasting: 3\nI0125 10:31:40.331739    2429 log.go:172] (0xc000ba82c0) (0xc000c060a0) Stream removed, broadcasting: 5\n"
Jan 25 10:31:40.340: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 10:31:40.340: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 10:31:40.346: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:31:40.346: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:31:40.346: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
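Scale-up completing while ss-0 was still unready is the burst-scaling behaviour under test: with podManagementPolicy: Parallel the controller does not wait for ordinal N-1 to become Ready before creating ordinal N. A hand-run equivalent of the scale-up step, under the same assumptions as above:

kubectl --kubeconfig=/root/.kube/config scale statefulset ss -n statefulset-9206 --replicas=3
# With Parallel management the new pods are created at once, not one ordinal at a time
kubectl --kubeconfig=/root/.kube/config get pods -n statefulset-9206 -w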
STEP: Scale-down will not halt with an unhealthy stateful pod
Jan 25 10:31:40.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 10:31:40.744: INFO: stderr: "I0125 10:31:40.478110    2448 log.go:172] (0xc000c30000) (0xc000cb4500) Create stream\nI0125 10:31:40.478287    2448 log.go:172] (0xc000c30000) (0xc000cb4500) Stream added, broadcasting: 1\nI0125 10:31:40.482864    2448 log.go:172] (0xc000c30000) Reply frame received for 1\nI0125 10:31:40.482925    2448 log.go:172] (0xc000c30000) (0xc000c88320) Create stream\nI0125 10:31:40.482935    2448 log.go:172] (0xc000c30000) (0xc000c88320) Stream added, broadcasting: 3\nI0125 10:31:40.484037    2448 log.go:172] (0xc000c30000) Reply frame received for 3\nI0125 10:31:40.484062    2448 log.go:172] (0xc000c30000) (0xc0009a4000) Create stream\nI0125 10:31:40.484069    2448 log.go:172] (0xc000c30000) (0xc0009a4000) Stream added, broadcasting: 5\nI0125 10:31:40.485186    2448 log.go:172] (0xc000c30000) Reply frame received for 5\nI0125 10:31:40.594170    2448 log.go:172] (0xc000c30000) Data frame received for 3\nI0125 10:31:40.594323    2448 log.go:172] (0xc000c88320) (3) Data frame handling\nI0125 10:31:40.594352    2448 log.go:172] (0xc000c88320) (3) Data frame sent\nI0125 10:31:40.595388    2448 log.go:172] (0xc000c30000) Data frame received for 5\nI0125 10:31:40.595442    2448 log.go:172] (0xc0009a4000) (5) Data frame handling\nI0125 10:31:40.595476    2448 log.go:172] (0xc0009a4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 10:31:40.731844    2448 log.go:172] (0xc000c30000) (0xc000c88320) Stream removed, broadcasting: 3\nI0125 10:31:40.732018    2448 log.go:172] (0xc000c30000) Data frame received for 1\nI0125 10:31:40.732039    2448 log.go:172] (0xc000cb4500) (1) Data frame handling\nI0125 10:31:40.732060    2448 log.go:172] (0xc000cb4500) (1) Data frame sent\nI0125 10:31:40.732181    2448 log.go:172] (0xc000c30000) (0xc000cb4500) Stream removed, broadcasting: 1\nI0125 10:31:40.732720    2448 log.go:172] (0xc000c30000) (0xc0009a4000) Stream removed, broadcasting: 5\nI0125 10:31:40.732829    2448 log.go:172] (0xc000c30000) Go away received\nI0125 10:31:40.732981    2448 log.go:172] (0xc000c30000) (0xc000cb4500) Stream removed, broadcasting: 1\nI0125 10:31:40.733045    2448 log.go:172] (0xc000c30000) (0xc000c88320) Stream removed, broadcasting: 3\nI0125 10:31:40.733061    2448 log.go:172] (0xc000c30000) (0xc0009a4000) Stream removed, broadcasting: 5\n"
Jan 25 10:31:40.744: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 10:31:40.744: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 10:31:40.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 10:31:41.098: INFO: stderr: "I0125 10:31:40.874812    2468 log.go:172] (0xc00093b3f0) (0xc0009301e0) Create stream\nI0125 10:31:40.874898    2468 log.go:172] (0xc00093b3f0) (0xc0009301e0) Stream added, broadcasting: 1\nI0125 10:31:40.878751    2468 log.go:172] (0xc00093b3f0) Reply frame received for 1\nI0125 10:31:40.878795    2468 log.go:172] (0xc00093b3f0) (0xc000607cc0) Create stream\nI0125 10:31:40.878807    2468 log.go:172] (0xc00093b3f0) (0xc000607cc0) Stream added, broadcasting: 3\nI0125 10:31:40.880011    2468 log.go:172] (0xc00093b3f0) Reply frame received for 3\nI0125 10:31:40.880027    2468 log.go:172] (0xc00093b3f0) (0xc00057c8c0) Create stream\nI0125 10:31:40.880032    2468 log.go:172] (0xc00093b3f0) (0xc00057c8c0) Stream added, broadcasting: 5\nI0125 10:31:40.880953    2468 log.go:172] (0xc00093b3f0) Reply frame received for 5\nI0125 10:31:40.947421    2468 log.go:172] (0xc00093b3f0) Data frame received for 5\nI0125 10:31:40.947475    2468 log.go:172] (0xc00057c8c0) (5) Data frame handling\nI0125 10:31:40.947503    2468 log.go:172] (0xc00057c8c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 10:31:40.975989    2468 log.go:172] (0xc00093b3f0) Data frame received for 3\nI0125 10:31:40.976020    2468 log.go:172] (0xc000607cc0) (3) Data frame handling\nI0125 10:31:40.976036    2468 log.go:172] (0xc000607cc0) (3) Data frame sent\nI0125 10:31:41.091126    2468 log.go:172] (0xc00093b3f0) (0xc000607cc0) Stream removed, broadcasting: 3\nI0125 10:31:41.091405    2468 log.go:172] (0xc00093b3f0) Data frame received for 1\nI0125 10:31:41.091454    2468 log.go:172] (0xc00093b3f0) (0xc00057c8c0) Stream removed, broadcasting: 5\nI0125 10:31:41.091494    2468 log.go:172] (0xc0009301e0) (1) Data frame handling\nI0125 10:31:41.091507    2468 log.go:172] (0xc0009301e0) (1) Data frame sent\nI0125 10:31:41.091513    2468 log.go:172] (0xc00093b3f0) (0xc0009301e0) Stream removed, broadcasting: 1\nI0125 10:31:41.091522    2468 log.go:172] (0xc00093b3f0) Go away received\nI0125 10:31:41.092072    2468 log.go:172] (0xc00093b3f0) (0xc0009301e0) Stream removed, broadcasting: 1\nI0125 10:31:41.092086    2468 log.go:172] (0xc00093b3f0) (0xc000607cc0) Stream removed, broadcasting: 3\nI0125 10:31:41.092093    2468 log.go:172] (0xc00093b3f0) (0xc00057c8c0) Stream removed, broadcasting: 5\n"
Jan 25 10:31:41.099: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 10:31:41.099: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 10:31:41.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 10:31:41.565: INFO: stderr: "I0125 10:31:41.267944    2485 log.go:172] (0xc0009de000) (0xc0009c6000) Create stream\nI0125 10:31:41.268071    2485 log.go:172] (0xc0009de000) (0xc0009c6000) Stream added, broadcasting: 1\nI0125 10:31:41.272086    2485 log.go:172] (0xc0009de000) Reply frame received for 1\nI0125 10:31:41.272139    2485 log.go:172] (0xc0009de000) (0xc0009c61e0) Create stream\nI0125 10:31:41.272149    2485 log.go:172] (0xc0009de000) (0xc0009c61e0) Stream added, broadcasting: 3\nI0125 10:31:41.274060    2485 log.go:172] (0xc0009de000) Reply frame received for 3\nI0125 10:31:41.274086    2485 log.go:172] (0xc0009de000) (0xc0009c6280) Create stream\nI0125 10:31:41.274094    2485 log.go:172] (0xc0009de000) (0xc0009c6280) Stream added, broadcasting: 5\nI0125 10:31:41.275717    2485 log.go:172] (0xc0009de000) Reply frame received for 5\nI0125 10:31:41.347739    2485 log.go:172] (0xc0009de000) Data frame received for 5\nI0125 10:31:41.347781    2485 log.go:172] (0xc0009c6280) (5) Data frame handling\nI0125 10:31:41.347797    2485 log.go:172] (0xc0009c6280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 10:31:41.444612    2485 log.go:172] (0xc0009de000) Data frame received for 3\nI0125 10:31:41.444802    2485 log.go:172] (0xc0009c61e0) (3) Data frame handling\nI0125 10:31:41.444884    2485 log.go:172] (0xc0009c61e0) (3) Data frame sent\nI0125 10:31:41.537872    2485 log.go:172] (0xc0009de000) (0xc0009c61e0) Stream removed, broadcasting: 3\nI0125 10:31:41.538084    2485 log.go:172] (0xc0009de000) Data frame received for 1\nI0125 10:31:41.538136    2485 log.go:172] (0xc0009de000) (0xc0009c6280) Stream removed, broadcasting: 5\nI0125 10:31:41.538219    2485 log.go:172] (0xc0009c6000) (1) Data frame handling\nI0125 10:31:41.538254    2485 log.go:172] (0xc0009c6000) (1) Data frame sent\nI0125 10:31:41.538277    2485 log.go:172] (0xc0009de000) (0xc0009c6000) Stream removed, broadcasting: 1\nI0125 10:31:41.538291    2485 log.go:172] (0xc0009de000) Go away received\nI0125 10:31:41.551122    2485 log.go:172] (0xc0009de000) (0xc0009c6000) Stream removed, broadcasting: 1\nI0125 10:31:41.551325    2485 log.go:172] (0xc0009de000) (0xc0009c61e0) Stream removed, broadcasting: 3\nI0125 10:31:41.551376    2485 log.go:172] (0xc0009de000) (0xc0009c6280) Stream removed, broadcasting: 5\n"
Jan 25 10:31:41.565: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 10:31:41.565: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 10:31:41.565: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 10:31:41.572: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 25 10:31:51.586: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 10:31:51.586: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 10:31:51.586: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 10:31:51.612: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 10:31:51.612: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:18 +0000 UTC  }]
Jan 25 10:31:51.612: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  }]
Jan 25 10:31:51.612: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  }]
Jan 25 10:31:51.613: INFO: 
Jan 25 10:31:51.613: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 10:31:53.125: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 10:31:53.125: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:18 +0000 UTC  }]
Jan 25 10:31:53.126: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  }]
Jan 25 10:31:53.126: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  }]
Jan 25 10:31:53.126: INFO: 
Jan 25 10:31:53.126: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 10:31:54 - 10:31:57: INFO: [four identical status dumps elided: ss-0, ss-1 and ss-2 all still Running with 30s grace and Ready=False; StatefulSet ss has not reached scale 0, at 3]
Jan 25 10:31:58.599: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 10:31:58.599: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:18 +0000 UTC  }]
Jan 25 10:31:58.600: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  }]
Jan 25 10:31:58.600: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  }]
Jan 25 10:31:58.600: INFO: 
Jan 25 10:31:58.600: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 10:31:59.609: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 10:31:59.609: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:18 +0000 UTC  }]
Jan 25 10:31:59.609: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  }]
Jan 25 10:31:59.609: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  }]
Jan 25 10:31:59.609: INFO: 
Jan 25 10:31:59.609: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 10:32:00.617: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 10:32:00.617: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  }]
Jan 25 10:32:00.617: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:42 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 10:31:28 +0000 UTC  }]
Jan 25 10:32:00.618: INFO: 
Jan 25 10:32:00.618: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-9206
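The matching scale-down, sketched by hand under the same assumptions; with Parallel pod management all replicas are deleted at once rather than in reverse ordinal order:

kubectl --kubeconfig=/root/.kube/config scale statefulset ss -n statefulset-9206 --replicas=0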
Jan 25 10:32:01.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:32:01.892: INFO: rc: 1
Jan 25 10:32:01.893: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Jan 25 10:32:11.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:32:12.039: INFO: rc: 1
Jan 25 10:32:12.039: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 10:32:22 - 10:36:56: INFO: [28 further identical retries elided, one every 10s: each run of 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' on ss-1 exited rc: 1 with stderr 'Error from server (NotFound): pods "ss-1" not found']
Jan 25 10:37:06.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9206 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:37:06.592: INFO: rc: 1
Jan 25 10:37:06.593: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
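The five minutes of failed retries above are the suite trying to move index.html back into place on ss-1: the first attempt raced the terminating container ("container not found"), and every later one found the pod already deleted by the scale-down, so the retry budget simply ran out. To block on the deletion directly, a sketch assuming a kubectl new enough (v1.11+) to support wait --for=delete:

kubectl --kubeconfig=/root/.kube/config wait --for=delete pod/ss-1 -n statefulset-9206 --timeout=5m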
Jan 25 10:37:06.593: INFO: Scaling statefulset ss to 0
Jan 25 10:37:06.619: INFO: Waiting for statefulset status.replicas updated to 0
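The "status.replicas updated to 0" condition can also be checked directly against the StatefulSet's status; a one-liner under the same assumptions:

kubectl --kubeconfig=/root/.kube/config get statefulset ss -n statefulset-9206 \
  -o jsonpath='{.status.replicas}{"\n"}'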
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 25 10:37:06.625: INFO: Deleting all statefulset in ns statefulset-9206
Jan 25 10:37:06.630: INFO: Scaling statefulset ss to 0
Jan 25 10:37:06.642: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 10:37:06.645: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:37:06.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9206" for this suite.

• [SLOW TEST:348.595 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":279,"completed":147,"skipped":2984,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:37:06.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
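The pod under test pairs an ordinary long-running container with a preStop exec hook that reports back to the HTTPGet handler container created in BeforeEach. A hand-rolled pod of the same shape, where the image, command and handler URL are illustrative assumptions rather than the suite's actual agnhost wiring:

kubectl --kubeconfig=/root/.kube/config apply -n container-lifecycle-hook-1640 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox:1.31          # assumption: stand-in for the suite's image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # HANDLER_POD_IP is a hypothetical placeholder for the hook-handler pod's IP
          command: ["sh", "-c", "wget -q -O- http://HANDLER_POD_IP:8080/echo?msg=prestop"]
EOF

# Deleting the pod runs the preStop hook before the container is stopped
kubectl --kubeconfig=/root/.kube/config delete pod pod-with-prestop-exec-hook -n container-lifecycle-hook-1640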
STEP: delete the pod with lifecycle hook
Jan 25 10:37:20.947: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 10:37:21.059: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 10:37:23.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 10:37:23.074: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 10:37:25.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 10:37:25.069: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 10:37:27.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 10:37:27.069: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 10:37:29.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 10:37:29.072: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 10:37:31.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 10:37:31.069: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 10:37:33.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 10:37:33.096: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:37:33.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1640" for this suite.

• [SLOW TEST:26.438 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":279,"completed":148,"skipped":2988,"failed":0}
SSSSSSSS
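The polling above ("Waiting for pod pod-with-prestop-exec-hook to disappear") spans the deletion grace period during which the preStop hook must run before the container is killed. A minimal sketch of such a pod; the handler URL is a placeholder, whereas the real test points the hook at the HTTPGet handler pod it created in BeforeEach:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-exec-hook   # name as in the log
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          exec:
            # placeholder endpoint; the hook must complete within the grace period
            command: ["sh", "-c", "wget -q -O- http://handler.example.svc:8080/echo?msg=prestop"]
  EOF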
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:37:33.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0125 10:37:34.460202       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 10:37:34.460: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:37:34.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-117" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":279,"completed":149,"skipped":2996,"failed":0}
SSS
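Deleting with deleteOptions.PropagationPolicy=Orphan tells the garbage collector to strip owner references instead of cascading, which is why the step above watches for the ReplicaSet to survive the Deployment's deletion. The command-line equivalent (deployment name is a placeholder; the flag spelling depends on the client version, and this run's v1.17-era kubectl uses the older form):

  kubectl delete deployment my-deployment --cascade=orphan   # kubectl v1.20+
  kubectl delete deployment my-deployment --cascade=false    # older clients, e.g. v1.17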
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:37:34.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5370
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5370
STEP: Creating statefulset with conflicting port in namespace statefulset-5370
STEP: Waiting until pod test-pod will start running in namespace statefulset-5370
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5370
Jan 25 10:37:52.695: INFO: Observed stateful pod in namespace: statefulset-5370, name: ss-0, uid: 9311f009-d427-4091-a4b1-23db9c8124a8, status phase: Failed. Waiting for statefulset controller to delete.
Jan 25 10:37:52.700: INFO: Observed stateful pod in namespace: statefulset-5370, name: ss-0, uid: 9311f009-d427-4091-a4b1-23db9c8124a8, status phase: Failed. Waiting for statefulset controller to delete.
Jan 25 10:37:52.755: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5370
STEP: Removing pod with conflicting port in namespace statefulset-5370
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5370 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 25 10:38:02.883: INFO: Deleting all statefulset in ns statefulset-5370
Jan 25 10:38:02.889: INFO: Scaling statefulset ss to 0
Jan 25 10:38:12.963: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 10:38:12.968: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:38:12.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5370" for this suite.

• [SLOW TEST:38.547 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":279,"completed":150,"skipped":2999,"failed":0}
SSSSSSSSSSSSSS
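The Failed phase observed twice above comes from a deliberate hostPort collision: test-pod already holds the port on the node, so ss-0 starts, fails, and is deleted and recreated by the StatefulSet controller until the conflicting pod is removed. The collision can be sketched with any pod claiming the same hostPort (the port number and node pinning below are placeholders; the test derives both at runtime):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod               # name as in the log
    namespace: statefulset-5370
  spec:
    nodeName: jerma-node         # assumed: pin to the node chosen for the stateful pod
    containers:
    - name: conflict
      image: httpd:2.4.38-alpine
      ports:
      - containerPort: 80
        hostPort: 21017          # placeholder; any port also claimed by ss-0 triggers the conflict
  EOF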
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:38:13.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-4kzs
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 10:38:13.139: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4kzs" in namespace "subpath-2546" to be "success or failure"
Jan 25 10:38:13.147: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Pending", Reason="", readiness=false. Elapsed: 7.625203ms
Jan 25 10:38:15.155: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016095046s
Jan 25 10:38:17.173: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033595613s
Jan 25 10:38:19.191: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051462411s
Jan 25 10:38:21.200: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 8.061157226s
Jan 25 10:38:23.209: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 10.069857833s
Jan 25 10:38:25.224: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 12.08476403s
Jan 25 10:38:27.232: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 14.092637687s
Jan 25 10:38:29.241: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 16.101793875s
Jan 25 10:38:31.249: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 18.110188744s
Jan 25 10:38:33.256: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 20.117034821s
Jan 25 10:38:35.265: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 22.125595407s
Jan 25 10:38:37.277: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 24.137632077s
Jan 25 10:38:39.285: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 26.146035593s
Jan 25 10:38:41.294: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Running", Reason="", readiness=true. Elapsed: 28.154614547s
Jan 25 10:38:43.302: INFO: Pod "pod-subpath-test-configmap-4kzs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.163175392s
STEP: Saw pod success
Jan 25 10:38:43.303: INFO: Pod "pod-subpath-test-configmap-4kzs" satisfied condition "success or failure"
Jan 25 10:38:43.307: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-4kzs container test-container-subpath-configmap-4kzs: 
STEP: delete the pod
Jan 25 10:38:43.654: INFO: Waiting for pod pod-subpath-test-configmap-4kzs to disappear
Jan 25 10:38:43.663: INFO: Pod pod-subpath-test-configmap-4kzs no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4kzs
Jan 25 10:38:43.663: INFO: Deleting pod "pod-subpath-test-configmap-4kzs" in namespace "subpath-2546"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:38:43.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2546" for this suite.

• [SLOW TEST:30.663 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":279,"completed":151,"skipped":3013,"failed":0}
SSSSSS
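The half-minute of Running polls above is the test container repeatedly reading a subPath-mounted file to check that atomic-writer updates to the configMap volume stay consistent. A minimal sketch of mounting a single configMap key through subPath (configMap and key names are placeholders):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: cfg
      configMap:
        name: my-configmap           # placeholder
    containers:
    - name: test
      image: busybox
      command: ["cat", "/etc/demo/mykey"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/demo/mykey
        subPath: mykey               # mounts just this key's file, not the whole volume
  EOF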
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:38:43.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-96c2d37f-8521-46b2-8964-43b903cabf92
STEP: Creating a pod to test consume configMaps
Jan 25 10:38:44.058: INFO: Waiting up to 5m0s for pod "pod-configmaps-39a38512-7043-48aa-8585-ce9ea1acb960" in namespace "configmap-9162" to be "success or failure"
Jan 25 10:38:44.071: INFO: Pod "pod-configmaps-39a38512-7043-48aa-8585-ce9ea1acb960": Phase="Pending", Reason="", readiness=false. Elapsed: 12.242539ms
Jan 25 10:38:46.077: INFO: Pod "pod-configmaps-39a38512-7043-48aa-8585-ce9ea1acb960": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019061328s
Jan 25 10:38:48.085: INFO: Pod "pod-configmaps-39a38512-7043-48aa-8585-ce9ea1acb960": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026567909s
Jan 25 10:38:50.096: INFO: Pod "pod-configmaps-39a38512-7043-48aa-8585-ce9ea1acb960": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037492788s
Jan 25 10:38:52.101: INFO: Pod "pod-configmaps-39a38512-7043-48aa-8585-ce9ea1acb960": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042839975s
STEP: Saw pod success
Jan 25 10:38:52.101: INFO: Pod "pod-configmaps-39a38512-7043-48aa-8585-ce9ea1acb960" satisfied condition "success or failure"
Jan 25 10:38:52.105: INFO: Trying to get logs from node jerma-node pod pod-configmaps-39a38512-7043-48aa-8585-ce9ea1acb960 container configmap-volume-test: 
STEP: delete the pod
Jan 25 10:38:52.224: INFO: Waiting for pod pod-configmaps-39a38512-7043-48aa-8585-ce9ea1acb960 to disappear
Jan 25 10:38:52.232: INFO: Pod pod-configmaps-39a38512-7043-48aa-8585-ce9ea1acb960 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:38:52.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9162" for this suite.

• [SLOW TEST:8.559 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":279,"completed":152,"skipped":3019,"failed":0}
S
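"With mappings" refers to the configMap volume's items list, which renames selected keys to chosen paths inside the mount instead of exposing every key under its own name. A sketch with placeholder names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cfg-mapping-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: cfg
      configMap:
        name: my-configmap           # placeholder
        items:
        - key: data-1                # placeholder key
          path: path/to/data-1       # file appears at <mountPath>/path/to/data-1
    containers:
    - name: test
      image: busybox
      command: ["cat", "/etc/cfg/path/to/data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
  EOF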
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:38:52.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-b217b27b-a124-4e04-8031-ab825153fdb3
STEP: Creating a pod to test consume configMaps
Jan 25 10:38:52.502: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1" in namespace "projected-8236" to be "success or failure"
Jan 25 10:38:52.516: INFO: Pod "pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.677094ms
Jan 25 10:38:54.527: INFO: Pod "pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025486434s
Jan 25 10:38:56.543: INFO: Pod "pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041456501s
Jan 25 10:38:58.556: INFO: Pod "pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054080133s
Jan 25 10:39:00.567: INFO: Pod "pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064916133s
Jan 25 10:39:02.577: INFO: Pod "pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074718201s
STEP: Saw pod success
Jan 25 10:39:02.577: INFO: Pod "pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1" satisfied condition "success or failure"
Jan 25 10:39:02.580: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 10:39:02.646: INFO: Waiting for pod pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1 to disappear
Jan 25 10:39:02.651: INFO: Pod pod-projected-configmaps-2cad2544-2154-4468-bb00-510ea2d2d2f1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:39:02.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8236" for this suite.

• [SLOW TEST:10.419 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":279,"completed":153,"skipped":3020,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
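The projected variant exercises the same key-to-path mapping, but through a projected volume, which can merge configMaps, secrets, and downward API data under a single mountPath. Only the volume stanza differs from the previous sketch (names remain placeholders):

  # volume stanza for the projected equivalent of the sketch above
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-configmap
          items:
          - key: data-1
            path: path/to/data-1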
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:39:02.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-4dbc533c-b257-4b30-a834-8a655bda9cd3
STEP: Creating a pod to test consume configMaps
Jan 25 10:39:02.751: INFO: Waiting up to 5m0s for pod "pod-configmaps-9401e265-b952-42e7-aace-8b661c352b14" in namespace "configmap-896" to be "success or failure"
Jan 25 10:39:02.756: INFO: Pod "pod-configmaps-9401e265-b952-42e7-aace-8b661c352b14": Phase="Pending", Reason="", readiness=false. Elapsed: 5.060565ms
Jan 25 10:39:04.762: INFO: Pod "pod-configmaps-9401e265-b952-42e7-aace-8b661c352b14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011488364s
Jan 25 10:39:06.769: INFO: Pod "pod-configmaps-9401e265-b952-42e7-aace-8b661c352b14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01869635s
Jan 25 10:39:08.793: INFO: Pod "pod-configmaps-9401e265-b952-42e7-aace-8b661c352b14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042451013s
Jan 25 10:39:10.801: INFO: Pod "pod-configmaps-9401e265-b952-42e7-aace-8b661c352b14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04985623s
STEP: Saw pod success
Jan 25 10:39:10.801: INFO: Pod "pod-configmaps-9401e265-b952-42e7-aace-8b661c352b14" satisfied condition "success or failure"
Jan 25 10:39:10.805: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9401e265-b952-42e7-aace-8b661c352b14 container configmap-volume-test: 
STEP: delete the pod
Jan 25 10:39:10.853: INFO: Waiting for pod pod-configmaps-9401e265-b952-42e7-aace-8b661c352b14 to disappear
Jan 25 10:39:10.865: INFO: Pod pod-configmaps-9401e265-b952-42e7-aace-8b661c352b14 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:39:10.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-896" for this suite.

• [SLOW TEST:8.213 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":279,"completed":154,"skipped":3048,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
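The non-root variant additionally runs the consumer container under a non-zero UID and checks the projected file is still readable; configMap volume files default to mode 0644, which shows up as DefaultMode:*420 (the decimal form) in the pod dumps later in this log. Sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cfg-nonroot-demo
  spec:
    securityContext:
      runAsUser: 1000                # the point of the test: consume as non-root
    restartPolicy: Never
    volumes:
    - name: cfg
      configMap:
        name: my-configmap           # placeholder
    containers:
    - name: test
      image: busybox
      command: ["cat", "/etc/cfg/data-1"]   # placeholder key
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
  EOF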
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:39:10.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0125 10:39:53.767562       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 10:39:53.767: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:39:53.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1535" for this suite.

• [SLOW TEST:42.905 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":279,"completed":155,"skipped":3074,"failed":0}
SSSS
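Same orphan semantics as the Deployment case earlier, one level down: the pods outlive their ReplicationController. At the REST level the policy travels in the DELETE request body, which is what "deleteOptions" in the test name refers to (resource name and namespace are placeholders):

  kubectl proxy --port=8080 &
  curl -X DELETE \
    http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/my-rc \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'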
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:39:53.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 10:39:53.901: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70" in namespace "downward-api-685" to be "success or failure"
Jan 25 10:39:53.912: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 10.425157ms
Jan 25 10:39:55.924: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022567567s
Jan 25 10:39:57.932: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03041271s
Jan 25 10:40:00.006: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10521532s
Jan 25 10:40:02.793: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 8.891673289s
Jan 25 10:40:05.872: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 11.971032564s
Jan 25 10:40:08.267: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 14.365888127s
Jan 25 10:40:10.275: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 16.373554414s
Jan 25 10:40:12.314: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 18.412456604s
Jan 25 10:40:14.321: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 20.420041073s
Jan 25 10:40:16.329: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Pending", Reason="", readiness=false. Elapsed: 22.427878434s
Jan 25 10:40:18.338: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.436710899s
STEP: Saw pod success
Jan 25 10:40:18.338: INFO: Pod "downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70" satisfied condition "success or failure"
Jan 25 10:40:18.343: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70 container client-container: 
STEP: delete the pod
Jan 25 10:40:18.378: INFO: Waiting for pod downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70 to disappear
Jan 25 10:40:18.382: INFO: Pod downwardapi-volume-7bb05d67-17a6-4522-a736-258163f70e70 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:40:18.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-685" for this suite.

• [SLOW TEST:24.608 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":156,"skipped":3078,"failed":0}
SS
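DefaultMode here is the downwardAPI volume's file-permission knob, the same field the volume dumps in this log render as DefaultMode:*420 (0644). A sketch that sets a restrictive mode and exposes one pod field (the mode value is illustrative; the conformance test picks its own):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mode-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0400            # octal 0400 = decimal 256; files come up r--------
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
  EOF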
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:40:18.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 10:40:19.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:40:21.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:40:23.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:40:25.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545619, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 10:40:28.474: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan 25 10:40:36.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-2090 to-be-attached-pod -i -c=container1'
Jan 25 10:40:38.744: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:40:38.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2090" for this suite.
STEP: Destroying namespace "webhook-2090-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:20.568 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":279,"completed":157,"skipped":3080,"failed":0}
SSSSS
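The rc 1 from kubectl attach above is the admission webhook vetoing the CONNECT operation on the pods/attach subresource. The registration is roughly this shape; the service name and namespace match the log, while the webhook name, path, and CA bundle are placeholders:

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: deny-attaching-pod            # placeholder
  webhooks:
  - name: deny-attaching-pod.example.com
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CONNECT"]           # attach and exec arrive as CONNECT
      resources: ["pods/attach"]
    clientConfig:
      service:
        namespace: webhook-2090         # from the log
        name: e2e-test-webhook          # from the log
        path: /pods/attach              # placeholder
      caBundle: "LS0t..."               # placeholder
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
  EOF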
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:40:38.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 25 10:40:39.066: INFO: Waiting up to 5m0s for pod "pod-0769cbd1-d698-4740-b3a3-38c0db654120" in namespace "emptydir-4620" to be "success or failure"
Jan 25 10:40:39.073: INFO: Pod "pod-0769cbd1-d698-4740-b3a3-38c0db654120": Phase="Pending", Reason="", readiness=false. Elapsed: 7.194759ms
Jan 25 10:40:41.082: INFO: Pod "pod-0769cbd1-d698-4740-b3a3-38c0db654120": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016442029s
Jan 25 10:40:43.090: INFO: Pod "pod-0769cbd1-d698-4740-b3a3-38c0db654120": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024294656s
Jan 25 10:40:45.107: INFO: Pod "pod-0769cbd1-d698-4740-b3a3-38c0db654120": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04074051s
Jan 25 10:40:47.117: INFO: Pod "pod-0769cbd1-d698-4740-b3a3-38c0db654120": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051236433s
Jan 25 10:40:49.125: INFO: Pod "pod-0769cbd1-d698-4740-b3a3-38c0db654120": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058859972s
Jan 25 10:40:51.134: INFO: Pod "pod-0769cbd1-d698-4740-b3a3-38c0db654120": Phase="Pending", Reason="", readiness=false. Elapsed: 12.067808192s
Jan 25 10:40:53.139: INFO: Pod "pod-0769cbd1-d698-4740-b3a3-38c0db654120": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.073113911s
STEP: Saw pod success
Jan 25 10:40:53.139: INFO: Pod "pod-0769cbd1-d698-4740-b3a3-38c0db654120" satisfied condition "success or failure"
Jan 25 10:40:53.143: INFO: Trying to get logs from node jerma-node pod pod-0769cbd1-d698-4740-b3a3-38c0db654120 container test-container: 
STEP: delete the pod
Jan 25 10:40:53.209: INFO: Waiting for pod pod-0769cbd1-d698-4740-b3a3-38c0db654120 to disappear
Jan 25 10:40:53.224: INFO: Pod pod-0769cbd1-d698-4740-b3a3-38c0db654120 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:40:53.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4620" for this suite.

• [SLOW TEST:14.270 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":158,"skipped":3085,"failed":0}
SSSSSSSSSSSSSSSSSSS
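The tuple (root,0644,tmpfs) in the test name encodes the parameters: write the file as root, expect mode 0644, and back the emptyDir with memory rather than node disk. Sketch:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory                 # tmpfs-backed
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo hello > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt
  EOF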
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:40:53.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 10:40:53.963: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 10:40:55.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545654, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:40:57.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545654, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:40:59.998: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545654, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 10:41:01.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545654, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715545653, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 10:41:05.087: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:41:05.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2967" for this suite.
STEP: Destroying namespace "webhook-2967-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.682 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":279,"completed":159,"skipped":3104,"failed":0}
SSSS
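The point of this test is the deadlock it rules out: it registers webhooks whose rules match ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, then shows that dummy configurations can still be created unmutated and deleted, because the API server does not let admission webhooks intercept webhook configuration objects (the behavior the test name asserts). A sketch of such a self-referential rule (fragment only; clientConfig and the required v1 fields are as in the previous webhook sketch):

  rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]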
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:41:05.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:41:06.025: INFO: Creating deployment "webserver-deployment"
Jan 25 10:41:06.034: INFO: Waiting for observed generation 1
Jan 25 10:41:08.248: INFO: Waiting for all required pods to come up
Jan 25 10:41:08.941: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 25 10:41:35.261: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 25 10:41:35.276: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 25 10:41:35.286: INFO: Updating deployment webserver-deployment
Jan 25 10:41:35.286: INFO: Waiting for observed generation 2
Jan 25 10:41:37.641: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 25 10:41:38.602: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 25 10:41:38.613: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 25 10:41:39.398: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 25 10:41:39.398: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 25 10:41:39.500: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 25 10:41:39.512: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 25 10:41:39.513: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 25 10:41:39.521: INFO: Updating deployment webserver-deployment
Jan 25 10:41:39.522: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 25 10:41:40.745: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 25 10:41:41.111: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
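The 20/13 split verified above is proportional scaling at work. A rough reconstruction of the arithmetic from the numbers in this log (RollingUpdate with MaxSurge:3 and MaxUnavailable:2, per the Deployment dump below):

  # before the scale-up: old RS holds 8 replicas, the broken-image RS holds 5 (8 + 5 = 13)
  # scaling desired 10 -> 30 raises the surge cap to 30 + 3 = 33, i.e. 20 replicas to add
  # shares are proportional to current size, with rounding leftovers going to the larger set:
  #   old RS: 20 * 8/13 = 12.3 -> 12  => 8 + 12 = 20   (the .spec.replicas = 20 check)
  #   new RS: the remaining 8         => 5 +  8 = 13   (the .spec.replicas = 13 check)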
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 25 10:41:45.489: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-1246 /apis/apps/v1/namespaces/deployment-1246/deployments/webserver-deployment 195872ae-8f5f-44b4-a739-1221b6aed975 4227169 3 2020-01-25 10:41:06 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003e87d18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-25 10:41:37 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-25 10:41:40 +0000 UTC,LastTransitionTime:2020-01-25 10:41:40 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jan 25 10:41:47.133: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-1246 /apis/apps/v1/namespaces/deployment-1246/replicasets/webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 4227238 3 2020-01-25 10:41:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 195872ae-8f5f-44b4-a739-1221b6aed975 0xc0004e6ee7 0xc0004e6ee8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0004e6f88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 10:41:47.133: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 25 10:41:47.133: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-1246 /apis/apps/v1/namespaces/deployment-1246/replicasets/webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 4227237 3 2020-01-25 10:41:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 195872ae-8f5f-44b4-a739-1221b6aed975 0xc0004e6d77 0xc0004e6d78}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0004e6de8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jan 25 10:41:48.982: INFO: Pod "webserver-deployment-595b5b9587-7l7bv" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7l7bv webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-7l7bv b853d6e3-3081-41fb-a079-73c14f0b472e 4227184 0 2020-01-25 10:41:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc000058787 0xc000058788}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
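[editor's note] On the "is available" / "is not available" labels in these lines: with MinReadySeconds:0 in the ReplicaSet spec above, availability reduces to the pod's Ready condition being True. A minimal sketch of that check (not the framework's own implementation):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podIsAvailable reports whether the pod's Ready condition is True.
// With MinReadySeconds:0, as here, no extra waiting period applies.
func podIsAvailable(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Mirrors the pod dumped above: scheduled, but no Ready condition yet.
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(podIsAvailable(p)) // false: scheduled but not yet Ready
}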
Jan 25 10:41:48.983: INFO: Pod "webserver-deployment-595b5b9587-b2jgc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-b2jgc webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-b2jgc ec54a749-3073-4e50-8c03-6a29a55b4372 4227241 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc000059297 0xc000059298}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 10:41:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.984: INFO: Pod "webserver-deployment-595b5b9587-cttxk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-cttxk webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-cttxk 1b19ace7-16ed-4972-b940-19bbd6984b19 4227061 0 2020-01-25 10:41:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc000059ed7 0xc000059ed8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-01-25 10:41:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 10:41:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://13f7157746f744432ddfdbb927d0285087fad15fd237632042ec000ca2d06b21,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
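[editor's note] For readers reproducing these per-pod dumps outside the suite, a hedged client-go sketch that lists the pods matching this ReplicaSet's selector and prints each pod's phase and readiness (namespace, selector labels, and kubeconfig path are taken from this log; error handling is kept minimal):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same label selector as the ReplicaSet dumped above.
	pods, err := cs.CoreV1().Pods("deployment-1246").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd,pod-template-hash=595b5b9587"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}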
Jan 25 10:41:48.984: INFO: Pod "webserver-deployment-595b5b9587-dhghr" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dhghr webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-dhghr d9696bdc-f5be-4c04-928b-410c3904c9a2 4227222 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d0160 0xc0011d0161}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.985: INFO: Pod "webserver-deployment-595b5b9587-hcsf4" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hcsf4 webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-hcsf4 9ab68df2-c375-47fa-a1d1-24c3dec09d7b 4227198 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d0277 0xc0011d0278}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.985: INFO: Pod "webserver-deployment-595b5b9587-jdnxw" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jdnxw webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-jdnxw b3d0207f-9867-4c7e-b687-295485db1db3 4227235 0 2020-01-25 10:41:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d03c7 0xc0011d03c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 10:41:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
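[editor's note] The ContainersNotReady condition above is explained by the container-level status: the pod is bound to a node but its container still reports a Waiting state with Reason ContainerCreating. A minimal sketch of extracting those waiting reasons from a pod's ContainerStatuses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReasons collects "name: reason" for every container still in a
// Waiting state, e.g. "httpd: ContainerCreating" for the pod above.
func waitingReasons(pod *corev1.Pod) []string {
	var reasons []string
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			reasons = append(reasons, cs.Name+": "+cs.State.Waiting.Reason)
		}
	}
	return reasons
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{ContainerStatuses: []corev1.ContainerStatus{
		{Name: "httpd", State: corev1.ContainerState{
			Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"},
		}},
	}}}
	fmt.Println(waitingReasons(pod)) // [httpd: ContainerCreating]
}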
Jan 25 10:41:48.986: INFO: Pod "webserver-deployment-595b5b9587-kz6gb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kz6gb webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-kz6gb bea5a89f-c9a0-44ff-8150-ae574dea27f2 4227088 0 2020-01-25 10:41:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d0527 0xc0011d0528}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-25 10:41:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-25 10:41:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 10:41:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://dc95f6f346532700b3b115fa239c56ff6ffbad1ab32e21f5a836e322570d2b6c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.986: INFO: Pod "webserver-deployment-595b5b9587-kzj52" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kzj52 webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-kzj52 08072fab-7f35-4879-904d-b0db70fdc224 4227186 0 2020-01-25 10:41:40 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d06a0 0xc0011d06a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.987: INFO: Pod "webserver-deployment-595b5b9587-m5mfh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-m5mfh webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-m5mfh 7a887c88-b096-4a0b-85e5-52dad37c72d8 4227214 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d07b7 0xc0011d07b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.987: INFO: Pod "webserver-deployment-595b5b9587-mgbmf" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mgbmf webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-mgbmf afeb8c26-fba6-4341-8323-79b7611a3414 4227052 0 2020-01-25 10:41:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d08d7 0xc0011d08d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-25 10:41:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 10:41:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6889d7c1556fe715218bfc31bbf1d52f1f0bf9325dd2a6c4e9de617494a7fa24,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.987: INFO: Pod "webserver-deployment-595b5b9587-mzqs2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mzqs2 webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-mzqs2 0febdffa-20c9-4189-8053-43297b584774 4227201 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d0a40 0xc0011d0a41}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.988: INFO: Pod "webserver-deployment-595b5b9587-n6gkt" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-n6gkt webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-n6gkt 14ca5b70-136c-4571-b830-74770c62901d 4227094 0 2020-01-25 10:41:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d0c57 0xc0011d0c58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-25 10:41:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-25 10:41:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 10:41:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://880aab7e9cdbfe546c78351dae121136495e6319887f22e2bf0271bdf7f5ff4e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.988: INFO: Pod "webserver-deployment-595b5b9587-nhhmd" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nhhmd webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-nhhmd 14023592-9bd1-4ace-a951-b56807a5f344 4227055 0 2020-01-25 10:41:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d0fc0 0xc0011d0fc1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-25 10:41:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 10:41:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://5db9c6a57bbc39a25752c14577a3b414cb745d76855097b796b9089bae033e78,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.989: INFO: Pod "webserver-deployment-595b5b9587-prk2x" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-prk2x webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-prk2x 2d04175b-9360-42ac-8551-9c405656f0e7 4227079 0 2020-01-25 10:41:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d13a0 0xc0011d13a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-25 10:41:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-25 10:41:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 10:41:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://7109bcbdbe00f2d7ff9e4cae483cc82ce35f961f1f3ee506b9464ef753bac919,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.989: INFO: Pod "webserver-deployment-595b5b9587-rmwcm" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rmwcm webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-rmwcm 72b59d93-2b62-4d9d-a9b5-9d887fe569b0 4227218 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d1780 0xc0011d1781}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
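[Editor's note] In the dumps above and below, the framework prints a pod as "available" only once its Ready condition is True; pods like webserver-deployment-595b5b9587-rmwcm, which so far carry nothing but PodScheduled=True and are still Pending, are printed "not available". A minimal Go sketch of that check, written against the same k8s.io/api types these structs are rendered from (an illustrative helper in a hypothetical package podcheck shared by the later sketches, not the e2e framework's own code; the controller additionally honors the Deployment's minReadySeconds, which is omitted here):

package podcheck

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodAvailable approximates the "is available" / "is not available"
// verdicts in this log: the pod must be Running and its Ready
// condition must be True. (Simplified: minReadySeconds is ignored.)
func isPodAvailable(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}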
Jan 25 10:41:48.989: INFO: Pod "webserver-deployment-595b5b9587-smrjh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-smrjh webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-smrjh 01a0ead7-b760-4586-8ab2-30f570811d0d 4227204 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d19b7 0xc0011d19b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.990: INFO: Pod "webserver-deployment-595b5b9587-t52cc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t52cc webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-t52cc ee23ec29-bb2e-456c-891c-d63e54bef9c3 4227058 0 2020-01-25 10:41:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d1c27 0xc0011d1c28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-25 10:41:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 10:41:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://3f2d2b224aa76881347d09b1b4ddd134b90de53e0f1db58e02af5a8b150cd926,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
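[Editor's note] The available pods above show the full condition progression PodScheduled -> Initialized -> ContainersReady -> Ready, and their LastTransitionTime fields reconstruct the timeline: t52cc was scheduled at 10:41:06, its httpd container started at 10:41:30, and the pod turned Ready at 10:41:31. A small sketch (same hypothetical podcheck package, not framework code) that prints this timeline for any pod:

package podcheck

import (
	"fmt"
	"sort"

	corev1 "k8s.io/api/core/v1"
)

// printConditionTimeline lists a pod's conditions oldest-first,
// recovering the lifecycle order visible in these dumps:
// PodScheduled -> Initialized -> ContainersReady -> Ready.
func printConditionTimeline(pod *corev1.Pod) {
	conds := append([]corev1.PodCondition(nil), pod.Status.Conditions...)
	sort.Slice(conds, func(i, j int) bool {
		return conds[i].LastTransitionTime.Before(&conds[j].LastTransitionTime)
	})
	for _, c := range conds {
		fmt.Printf("%s  %s=%s\n", c.LastTransitionTime.Format("15:04:05"), c.Type, c.Status)
	}
}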
Jan 25 10:41:48.990: INFO: Pod "webserver-deployment-595b5b9587-tv2dh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tv2dh webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-tv2dh c67fa3ae-7421-4bf6-9fd8-f84c6d90664a 4227210 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0011d1f50 0xc0011d1f51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.991: INFO: Pod "webserver-deployment-595b5b9587-x52m5" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-x52m5 webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-x52m5 81512def-a133-40ef-b69e-97d08a47e302 4227064 0 2020-01-25 10:41:06 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0027943b7 0xc0027943b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-25 10:41:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 10:41:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6fc22a749642ad3826e1d0a1598bc42f2693e5350c23084223a184fa0cb028fa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
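[Editor's note] Every pod in this dump reports QOSClass:BestEffort because the httpd container declares neither resource requests nor limits (Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{}}). A deliberately simplified classifier for just that case (the real kubelet logic also derives Guaranteed and Burstable and considers init containers):

package podcheck

import (
	corev1 "k8s.io/api/core/v1"
)

// isBestEffort reports whether no container declares any request or
// limit -- the condition behind QOSClass:BestEffort in these dumps.
// (Simplified: Guaranteed/Burstable and init containers are ignored.)
func isBestEffort(pod *corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			return false
		}
	}
	return true
}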
Jan 25 10:41:48.991: INFO: Pod "webserver-deployment-595b5b9587-xbfq7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xbfq7 webserver-deployment-595b5b9587- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-595b5b9587-xbfq7 03af36ee-b6bc-4eef-9004-10679c8e5fee 4227221 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d04724f2-5cb5-48de-b429-5f48ab2f4771 0xc0027947d0 0xc0027947d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.992: INFO: Pod "webserver-deployment-c7997dcc8-2nvnm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2nvnm webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-2nvnm 1dc27c94-900e-4d6e-ad05-8e6b03ce0304 4227155 0 2020-01-25 10:41:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc002794a87 0xc002794a88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 10:41:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
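[Editor's note] The webserver-deployment-c7997dcc8 pods run Image:webserver:404, a tag that never becomes ready in this run, so each sits at Ready=False with Reason:ContainersNotReady while its container stays in Waiting with Reason:ContainerCreating. A sketch (same hypothetical podcheck package) that surfaces the per-container reason behind such a condition:

package podcheck

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// unreadyReasons returns "name: reason" for each container that is
// not Ready -- for the pod above it yields "httpd: ContainerCreating".
func unreadyReasons(pod *corev1.Pod) []string {
	var out []string
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.Ready {
			continue
		}
		reason := "unknown"
		if cs.State.Waiting != nil {
			reason = cs.State.Waiting.Reason
		} else if cs.State.Terminated != nil {
			reason = cs.State.Terminated.Reason
		}
		out = append(out, fmt.Sprintf("%s: %s", cs.Name, reason))
	}
	return out
}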
Jan 25 10:41:48.992: INFO: Pod "webserver-deployment-c7997dcc8-7jn9v" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7jn9v webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-7jn9v 2b53d291-bcb6-415c-ac4c-234390088e81 4227127 0 2020-01-25 10:41:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc002794d87 0xc002794d88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 10:41:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.993: INFO: Pod "webserver-deployment-c7997dcc8-hvffk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hvffk webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-hvffk 7f22bc71-2cfd-417b-ba4b-8feed133adee 4227203 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc0027950c7 0xc0027950c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
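[Editor's note] Every PodSpec above carries the same two tolerations: node.kubernetes.io/not-ready and node.kubernetes.io/unreachable, both Exists/NoExecute with TolerationSeconds *300. These are not set by the test; they are the cluster-wide defaults injected by the DefaultTolerationSeconds admission plugin, which lets a pod remain bound for five minutes to a node that turns NotReady or Unreachable before it is evicted. Expressed as Go literals of the same corev1 types the dump prints:

package podcheck

import (
	corev1 "k8s.io/api/core/v1"
)

// defaultTolerations mirrors the two tolerations injected into every
// pod above: tolerate not-ready/unreachable nodes for 300 seconds.
func defaultTolerations() []corev1.Toleration {
	seconds := int64(300)
	return []corev1.Toleration{
		{Key: "node.kubernetes.io/not-ready", Operator: corev1.TolerationOpExists, Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
		{Key: "node.kubernetes.io/unreachable", Operator: corev1.TolerationOpExists, Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
	}
}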
Jan 25 10:41:48.993: INFO: Pod "webserver-deployment-c7997dcc8-js6l4" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-js6l4 webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-js6l4 f5d2b6c5-7dd4-46a1-b773-3db1a717ced2 4227243 0 2020-01-25 10:41:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc002795297 0xc002795298}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 10:41:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.994: INFO: Pod "webserver-deployment-c7997dcc8-kw46m" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kw46m webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-kw46m 3c53a242-1b82-4fc7-94ae-dc3b8c32499a 4227190 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc0027955f7 0xc0027955f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
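[Editor's note] Each pod's ownerReferences entry points at one of the deployment's two ReplicaSets -- webserver-deployment-595b5b9587 (image httpd:2.4.38-alpine) or webserver-deployment-c7997dcc8 (image webserver:404) -- and the pod-template-hash label carries the same split, which is what lets the test tally availability per ReplicaSet mid-rollout. A sketch of that tally (hypothetical podcheck package again, reusing isPodAvailable from the first sketch):

package podcheck

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// availableByOwner counts available pods per owning controller --
// here, per ReplicaSet -- mirroring the per-ReplicaSet availability
// these log lines enumerate.
func availableByOwner(pods []corev1.Pod) map[string]int {
	counts := map[string]int{}
	for i := range pods {
		owner := metav1.GetControllerOf(&pods[i])
		if owner == nil || !isPodAvailable(&pods[i]) {
			continue
		}
		counts[owner.Name]++
	}
	return counts
}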
Jan 25 10:41:48.994: INFO: Pod "webserver-deployment-c7997dcc8-l6j2f" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l6j2f webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-l6j2f e4b26bed-25ef-4c7f-bd85-1c7dc3d04f5b 4227200 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc002795827 0xc002795828}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.994: INFO: Pod "webserver-deployment-c7997dcc8-lqxwb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lqxwb webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-lqxwb 54848321-a269-4fc0-8adf-62e6b2facf1a 4227231 0 2020-01-25 10:41:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc002795ae7 0xc002795ae8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 10:41:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.995: INFO: Pod "webserver-deployment-c7997dcc8-r2lmc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r2lmc webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-r2lmc 0ea6fb88-83f1-471d-b7f1-2e7b8fdd0ed1 4227156 0 2020-01-25 10:41:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc002795e07 0xc002795e08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 10:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.995: INFO: Pod "webserver-deployment-c7997dcc8-rsp2h" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rsp2h webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-rsp2h b074d43a-08c3-42ee-853c-63ded158f01b 4227211 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc00310a0c7 0xc00310a0c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.996: INFO: Pod "webserver-deployment-c7997dcc8-smmww" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-smmww webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-smmww d44955e5-1967-4939-bb8a-c7d3ea18e665 4227133 0 2020-01-25 10:41:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc00310a1e7 0xc00310a1e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 10:41:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.996: INFO: Pod "webserver-deployment-c7997dcc8-tw24w" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tw24w webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-tw24w 61e1b829-6a86-4988-bca4-2c163ba65a2e 4227202 0 2020-01-25 10:41:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc00310a367 0xc00310a368}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.996: INFO: Pod "webserver-deployment-c7997dcc8-tz7k9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tz7k9 webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-tz7k9 1a8a135e-0d42-4689-8fcc-dd5ceb75da96 4227175 0 2020-01-25 10:41:40 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc00310a4a7 0xc00310a4a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 10:41:48.997: INFO: Pod "webserver-deployment-c7997dcc8-xgrrm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xgrrm webserver-deployment-c7997dcc8- deployment-1246 /api/v1/namespaces/deployment-1246/pods/webserver-deployment-c7997dcc8-xgrrm a503a9a9-72d8-4a53-b365-32f2da6dfb3d 4227121 0 2020-01-25 10:41:35 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 46abb918-5bfb-4bb6-bd42-074d6444da74 0xc00310a5d7 0xc00310a5d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5s8b4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5s8b4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5s8b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 10:41:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 10:41:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:41:48.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1246" for this suite.

• [SLOW TEST:44.626 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":279,"completed":160,"skipped":3108,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:41:50.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:42:57.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7414" for this suite.

• [SLOW TEST:66.957 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":279,"completed":161,"skipped":3114,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:42:57.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 25 10:42:57.647: INFO: Waiting up to 5m0s for pod "pod-846aea37-65cb-40a3-87e0-f8dde423c1fc" in namespace "emptydir-6700" to be "success or failure"
Jan 25 10:42:57.661: INFO: Pod "pod-846aea37-65cb-40a3-87e0-f8dde423c1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.511426ms
Jan 25 10:42:59.670: INFO: Pod "pod-846aea37-65cb-40a3-87e0-f8dde423c1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023640726s
Jan 25 10:43:01.680: INFO: Pod "pod-846aea37-65cb-40a3-87e0-f8dde423c1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033495474s
Jan 25 10:43:03.692: INFO: Pod "pod-846aea37-65cb-40a3-87e0-f8dde423c1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045384044s
Jan 25 10:43:05.701: INFO: Pod "pod-846aea37-65cb-40a3-87e0-f8dde423c1fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05385645s
Jan 25 10:43:07.715: INFO: Pod "pod-846aea37-65cb-40a3-87e0-f8dde423c1fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068538375s
STEP: Saw pod success
Jan 25 10:43:07.715: INFO: Pod "pod-846aea37-65cb-40a3-87e0-f8dde423c1fc" satisfied condition "success or failure"
Jan 25 10:43:07.720: INFO: Trying to get logs from node jerma-node pod pod-846aea37-65cb-40a3-87e0-f8dde423c1fc container test-container: 
STEP: delete the pod
Jan 25 10:43:07.899: INFO: Waiting for pod pod-846aea37-65cb-40a3-87e0-f8dde423c1fc to disappear
Jan 25 10:43:07.906: INFO: Pod pod-846aea37-65cb-40a3-87e0-f8dde423c1fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:43:07.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6700" for this suite.

• [SLOW TEST:10.492 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":162,"skipped":3141,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:43:07.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 10:43:08.211: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75dd2eec-7d8b-419c-8a53-d0e025965021" in namespace "projected-9968" to be "success or failure"
Jan 25 10:43:08.369: INFO: Pod "downwardapi-volume-75dd2eec-7d8b-419c-8a53-d0e025965021": Phase="Pending", Reason="", readiness=false. Elapsed: 158.652462ms
Jan 25 10:43:10.379: INFO: Pod "downwardapi-volume-75dd2eec-7d8b-419c-8a53-d0e025965021": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167809716s
Jan 25 10:43:12.385: INFO: Pod "downwardapi-volume-75dd2eec-7d8b-419c-8a53-d0e025965021": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173937209s
Jan 25 10:43:14.391: INFO: Pod "downwardapi-volume-75dd2eec-7d8b-419c-8a53-d0e025965021": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180579072s
Jan 25 10:43:16.400: INFO: Pod "downwardapi-volume-75dd2eec-7d8b-419c-8a53-d0e025965021": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.188799094s
STEP: Saw pod success
Jan 25 10:43:16.400: INFO: Pod "downwardapi-volume-75dd2eec-7d8b-419c-8a53-d0e025965021" satisfied condition "success or failure"
Jan 25 10:43:16.404: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-75dd2eec-7d8b-419c-8a53-d0e025965021 container client-container: 
STEP: delete the pod
Jan 25 10:43:16.441: INFO: Waiting for pod downwardapi-volume-75dd2eec-7d8b-419c-8a53-d0e025965021 to disappear
Jan 25 10:43:16.471: INFO: Pod downwardapi-volume-75dd2eec-7d8b-419c-8a53-d0e025965021 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:43:16.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9968" for this suite.

• [SLOW TEST:8.484 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":279,"completed":163,"skipped":3142,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:43:16.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9047
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9047
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9047
Jan 25 10:43:16.712: INFO: Found 0 stateful pods, waiting for 1
Jan 25 10:43:26.718: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 25 10:43:26.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 10:43:27.141: INFO: stderr: "I0125 10:43:26.911218    3110 log.go:172] (0xc000a6c0b0) (0xc000bb40a0) Create stream\nI0125 10:43:26.911329    3110 log.go:172] (0xc000a6c0b0) (0xc000bb40a0) Stream added, broadcasting: 1\nI0125 10:43:26.916018    3110 log.go:172] (0xc000a6c0b0) Reply frame received for 1\nI0125 10:43:26.921180    3110 log.go:172] (0xc000a6c0b0) (0xc000bb4140) Create stream\nI0125 10:43:26.921213    3110 log.go:172] (0xc000a6c0b0) (0xc000bb4140) Stream added, broadcasting: 3\nI0125 10:43:26.923553    3110 log.go:172] (0xc000a6c0b0) Reply frame received for 3\nI0125 10:43:26.923586    3110 log.go:172] (0xc000a6c0b0) (0xc0006148c0) Create stream\nI0125 10:43:26.923597    3110 log.go:172] (0xc000a6c0b0) (0xc0006148c0) Stream added, broadcasting: 5\nI0125 10:43:26.933332    3110 log.go:172] (0xc000a6c0b0) Reply frame received for 5\nI0125 10:43:27.008858    3110 log.go:172] (0xc000a6c0b0) Data frame received for 5\nI0125 10:43:27.008908    3110 log.go:172] (0xc0006148c0) (5) Data frame handling\nI0125 10:43:27.008923    3110 log.go:172] (0xc0006148c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 10:43:27.048182    3110 log.go:172] (0xc000a6c0b0) Data frame received for 3\nI0125 10:43:27.048287    3110 log.go:172] (0xc000bb4140) (3) Data frame handling\nI0125 10:43:27.048328    3110 log.go:172] (0xc000bb4140) (3) Data frame sent\nI0125 10:43:27.124502    3110 log.go:172] (0xc000a6c0b0) Data frame received for 1\nI0125 10:43:27.124656    3110 log.go:172] (0xc000a6c0b0) (0xc000bb4140) Stream removed, broadcasting: 3\nI0125 10:43:27.124731    3110 log.go:172] (0xc000bb40a0) (1) Data frame handling\nI0125 10:43:27.124756    3110 log.go:172] (0xc000bb40a0) (1) Data frame sent\nI0125 10:43:27.124776    3110 log.go:172] (0xc000a6c0b0) (0xc000bb40a0) Stream removed, broadcasting: 1\nI0125 10:43:27.125347    3110 log.go:172] (0xc000a6c0b0) (0xc0006148c0) Stream removed, broadcasting: 5\nI0125 10:43:27.125421    3110 log.go:172] (0xc000a6c0b0) (0xc000bb40a0) Stream removed, broadcasting: 1\nI0125 10:43:27.125436    3110 log.go:172] (0xc000a6c0b0) (0xc000bb4140) Stream removed, broadcasting: 3\nI0125 10:43:27.125449    3110 log.go:172] (0xc000a6c0b0) (0xc0006148c0) Stream removed, broadcasting: 5\n"
Jan 25 10:43:27.141: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 10:43:27.141: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 10:43:27.149: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 25 10:43:37.159: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 10:43:37.160: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 10:43:37.187: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999643s
Jan 25 10:43:38.198: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988746457s
Jan 25 10:43:39.208: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.977532483s
Jan 25 10:43:40.214: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.968314445s
Jan 25 10:43:41.227: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.962036515s
Jan 25 10:43:42.237: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.94874383s
Jan 25 10:43:43.247: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.938695169s
Jan 25 10:43:44.254: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.929270117s
Jan 25 10:43:45.262: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.922118956s
Jan 25 10:43:46.273: INFO: Verifying statefulset ss doesn't scale past 1 for another 914.194583ms
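
Note: at this point the desired replica count has already been raised to 3, but ss-0 was made un-Ready by moving index.html out of the httpd docroot, and under the default OrderedReady pod management policy the controller will not create ss-1 while an existing pod is unhealthy; the countdown above confirms that for a full 10s. The next step restores the file so the ordered scale-up can proceed; by hand that would look roughly like:

    kubectl exec -n statefulset-9047 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
    kubectl get pods -n statefulset-9047 -l baz=blah,foo=bar -w   # ss-1 is created only after ss-0 is Ready again
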
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9047
Jan 25 10:43:47.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:43:47.668: INFO: stderr: "I0125 10:43:47.491702    3130 log.go:172] (0xc000af8c60) (0xc000b14280) Create stream\nI0125 10:43:47.491920    3130 log.go:172] (0xc000af8c60) (0xc000b14280) Stream added, broadcasting: 1\nI0125 10:43:47.495674    3130 log.go:172] (0xc000af8c60) Reply frame received for 1\nI0125 10:43:47.495714    3130 log.go:172] (0xc000af8c60) (0xc0009b40a0) Create stream\nI0125 10:43:47.495727    3130 log.go:172] (0xc000af8c60) (0xc0009b40a0) Stream added, broadcasting: 3\nI0125 10:43:47.497270    3130 log.go:172] (0xc000af8c60) Reply frame received for 3\nI0125 10:43:47.497294    3130 log.go:172] (0xc000af8c60) (0xc000b14320) Create stream\nI0125 10:43:47.497302    3130 log.go:172] (0xc000af8c60) (0xc000b14320) Stream added, broadcasting: 5\nI0125 10:43:47.498817    3130 log.go:172] (0xc000af8c60) Reply frame received for 5\nI0125 10:43:47.572034    3130 log.go:172] (0xc000af8c60) Data frame received for 3\nI0125 10:43:47.572295    3130 log.go:172] (0xc0009b40a0) (3) Data frame handling\nI0125 10:43:47.572310    3130 log.go:172] (0xc0009b40a0) (3) Data frame sent\nI0125 10:43:47.572339    3130 log.go:172] (0xc000af8c60) Data frame received for 5\nI0125 10:43:47.572398    3130 log.go:172] (0xc000b14320) (5) Data frame handling\nI0125 10:43:47.572412    3130 log.go:172] (0xc000b14320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 10:43:47.653622    3130 log.go:172] (0xc000af8c60) Data frame received for 1\nI0125 10:43:47.653711    3130 log.go:172] (0xc000b14280) (1) Data frame handling\nI0125 10:43:47.653752    3130 log.go:172] (0xc000b14280) (1) Data frame sent\nI0125 10:43:47.653812    3130 log.go:172] (0xc000af8c60) (0xc000b14280) Stream removed, broadcasting: 1\nI0125 10:43:47.654299    3130 log.go:172] (0xc000af8c60) (0xc0009b40a0) Stream removed, broadcasting: 3\nI0125 10:43:47.655026    3130 log.go:172] (0xc000af8c60) (0xc000b14320) Stream removed, broadcasting: 5\nI0125 10:43:47.655084    3130 log.go:172] (0xc000af8c60) (0xc000b14280) Stream removed, broadcasting: 1\nI0125 10:43:47.655096    3130 log.go:172] (0xc000af8c60) (0xc0009b40a0) Stream removed, broadcasting: 3\nI0125 10:43:47.655106    3130 log.go:172] (0xc000af8c60) (0xc000b14320) Stream removed, broadcasting: 5\nI0125 10:43:47.655154    3130 log.go:172] (0xc000af8c60) Go away received\n"
Jan 25 10:43:47.668: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 10:43:47.668: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 10:43:47.682: INFO: Found 1 stateful pods, waiting for 3
Jan 25 10:43:57.692: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:43:57.692: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:43:57.692: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 10:44:07.703: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:44:07.703: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:44:07.703: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 25 10:44:07.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 10:44:08.138: INFO: stderr: "I0125 10:44:07.928661    3149 log.go:172] (0xc000bbaf20) (0xc000b7a320) Create stream\nI0125 10:44:07.928803    3149 log.go:172] (0xc000bbaf20) (0xc000b7a320) Stream added, broadcasting: 1\nI0125 10:44:07.933994    3149 log.go:172] (0xc000bbaf20) Reply frame received for 1\nI0125 10:44:07.934039    3149 log.go:172] (0xc000bbaf20) (0xc000a86320) Create stream\nI0125 10:44:07.934059    3149 log.go:172] (0xc000bbaf20) (0xc000a86320) Stream added, broadcasting: 3\nI0125 10:44:07.936167    3149 log.go:172] (0xc000bbaf20) Reply frame received for 3\nI0125 10:44:07.936243    3149 log.go:172] (0xc000bbaf20) (0xc000a2a140) Create stream\nI0125 10:44:07.936255    3149 log.go:172] (0xc000bbaf20) (0xc000a2a140) Stream added, broadcasting: 5\nI0125 10:44:07.938195    3149 log.go:172] (0xc000bbaf20) Reply frame received for 5\nI0125 10:44:08.033794    3149 log.go:172] (0xc000bbaf20) Data frame received for 3\nI0125 10:44:08.033941    3149 log.go:172] (0xc000a86320) (3) Data frame handling\nI0125 10:44:08.033972    3149 log.go:172] (0xc000a86320) (3) Data frame sent\nI0125 10:44:08.034072    3149 log.go:172] (0xc000bbaf20) Data frame received for 5\nI0125 10:44:08.034084    3149 log.go:172] (0xc000a2a140) (5) Data frame handling\nI0125 10:44:08.034111    3149 log.go:172] (0xc000a2a140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 10:44:08.128557    3149 log.go:172] (0xc000bbaf20) (0xc000a86320) Stream removed, broadcasting: 3\nI0125 10:44:08.128674    3149 log.go:172] (0xc000bbaf20) Data frame received for 1\nI0125 10:44:08.128719    3149 log.go:172] (0xc000b7a320) (1) Data frame handling\nI0125 10:44:08.128769    3149 log.go:172] (0xc000b7a320) (1) Data frame sent\nI0125 10:44:08.128795    3149 log.go:172] (0xc000bbaf20) (0xc000b7a320) Stream removed, broadcasting: 1\nI0125 10:44:08.128891    3149 log.go:172] (0xc000bbaf20) (0xc000a2a140) Stream removed, broadcasting: 5\nI0125 10:44:08.128988    3149 log.go:172] (0xc000bbaf20) Go away received\nI0125 10:44:08.130099    3149 log.go:172] (0xc000bbaf20) (0xc000b7a320) Stream removed, broadcasting: 1\nI0125 10:44:08.130179    3149 log.go:172] (0xc000bbaf20) (0xc000a86320) Stream removed, broadcasting: 3\nI0125 10:44:08.130196    3149 log.go:172] (0xc000bbaf20) (0xc000a2a140) Stream removed, broadcasting: 5\n"
Jan 25 10:44:08.138: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 10:44:08.138: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 10:44:08.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 10:44:08.657: INFO: stderr: "I0125 10:44:08.276600    3169 log.go:172] (0xc000a5ec60) (0xc000a4a500) Create stream\nI0125 10:44:08.276827    3169 log.go:172] (0xc000a5ec60) (0xc000a4a500) Stream added, broadcasting: 1\nI0125 10:44:08.299430    3169 log.go:172] (0xc000a5ec60) Reply frame received for 1\nI0125 10:44:08.299586    3169 log.go:172] (0xc000a5ec60) (0xc000609b80) Create stream\nI0125 10:44:08.299610    3169 log.go:172] (0xc000a5ec60) (0xc000609b80) Stream added, broadcasting: 3\nI0125 10:44:08.301290    3169 log.go:172] (0xc000a5ec60) Reply frame received for 3\nI0125 10:44:08.301355    3169 log.go:172] (0xc000a5ec60) (0xc000536780) Create stream\nI0125 10:44:08.301372    3169 log.go:172] (0xc000a5ec60) (0xc000536780) Stream added, broadcasting: 5\nI0125 10:44:08.308811    3169 log.go:172] (0xc000a5ec60) Reply frame received for 5\nI0125 10:44:08.421418    3169 log.go:172] (0xc000a5ec60) Data frame received for 5\nI0125 10:44:08.421477    3169 log.go:172] (0xc000536780) (5) Data frame handling\nI0125 10:44:08.421492    3169 log.go:172] (0xc000536780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 10:44:08.473270    3169 log.go:172] (0xc000a5ec60) Data frame received for 3\nI0125 10:44:08.473401    3169 log.go:172] (0xc000609b80) (3) Data frame handling\nI0125 10:44:08.473450    3169 log.go:172] (0xc000609b80) (3) Data frame sent\nI0125 10:44:08.622597    3169 log.go:172] (0xc000a5ec60) Data frame received for 1\nI0125 10:44:08.623129    3169 log.go:172] (0xc000a4a500) (1) Data frame handling\nI0125 10:44:08.623182    3169 log.go:172] (0xc000a4a500) (1) Data frame sent\nI0125 10:44:08.625058    3169 log.go:172] (0xc000a5ec60) (0xc000536780) Stream removed, broadcasting: 5\nI0125 10:44:08.625619    3169 log.go:172] (0xc000a5ec60) (0xc000a4a500) Stream removed, broadcasting: 1\nI0125 10:44:08.626199    3169 log.go:172] (0xc000a5ec60) (0xc000609b80) Stream removed, broadcasting: 3\nI0125 10:44:08.626272    3169 log.go:172] (0xc000a5ec60) Go away received\nI0125 10:44:08.627717    3169 log.go:172] (0xc000a5ec60) (0xc000a4a500) Stream removed, broadcasting: 1\nI0125 10:44:08.627763    3169 log.go:172] (0xc000a5ec60) (0xc000609b80) Stream removed, broadcasting: 3\nI0125 10:44:08.627787    3169 log.go:172] (0xc000a5ec60) (0xc000536780) Stream removed, broadcasting: 5\n"
Jan 25 10:44:08.658: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 10:44:08.658: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 10:44:08.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 10:44:09.000: INFO: stderr: "I0125 10:44:08.789530    3190 log.go:172] (0xc0006fc840) (0xc00064dea0) Create stream\nI0125 10:44:08.789650    3190 log.go:172] (0xc0006fc840) (0xc00064dea0) Stream added, broadcasting: 1\nI0125 10:44:08.797446    3190 log.go:172] (0xc0006fc840) Reply frame received for 1\nI0125 10:44:08.797484    3190 log.go:172] (0xc0006fc840) (0xc000598780) Create stream\nI0125 10:44:08.797492    3190 log.go:172] (0xc0006fc840) (0xc000598780) Stream added, broadcasting: 3\nI0125 10:44:08.799301    3190 log.go:172] (0xc0006fc840) Reply frame received for 3\nI0125 10:44:08.799343    3190 log.go:172] (0xc0006fc840) (0xc000191400) Create stream\nI0125 10:44:08.799356    3190 log.go:172] (0xc0006fc840) (0xc000191400) Stream added, broadcasting: 5\nI0125 10:44:08.800812    3190 log.go:172] (0xc0006fc840) Reply frame received for 5\nI0125 10:44:08.874947    3190 log.go:172] (0xc0006fc840) Data frame received for 5\nI0125 10:44:08.875200    3190 log.go:172] (0xc000191400) (5) Data frame handling\nI0125 10:44:08.875263    3190 log.go:172] (0xc000191400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 10:44:08.900245    3190 log.go:172] (0xc0006fc840) Data frame received for 3\nI0125 10:44:08.900273    3190 log.go:172] (0xc000598780) (3) Data frame handling\nI0125 10:44:08.900290    3190 log.go:172] (0xc000598780) (3) Data frame sent\nI0125 10:44:08.993433    3190 log.go:172] (0xc0006fc840) Data frame received for 1\nI0125 10:44:08.993709    3190 log.go:172] (0xc0006fc840) (0xc000598780) Stream removed, broadcasting: 3\nI0125 10:44:08.993794    3190 log.go:172] (0xc00064dea0) (1) Data frame handling\nI0125 10:44:08.993850    3190 log.go:172] (0xc00064dea0) (1) Data frame sent\nI0125 10:44:08.993905    3190 log.go:172] (0xc0006fc840) (0xc00064dea0) Stream removed, broadcasting: 1\nI0125 10:44:08.994456    3190 log.go:172] (0xc0006fc840) (0xc000191400) Stream removed, broadcasting: 5\nI0125 10:44:08.994486    3190 log.go:172] (0xc0006fc840) Go away received\nI0125 10:44:08.994643    3190 log.go:172] (0xc0006fc840) (0xc00064dea0) Stream removed, broadcasting: 1\nI0125 10:44:08.994684    3190 log.go:172] (0xc0006fc840) (0xc000598780) Stream removed, broadcasting: 3\nI0125 10:44:08.994701    3190 log.go:172] (0xc0006fc840) (0xc000191400) Stream removed, broadcasting: 5\n"
Jan 25 10:44:09.001: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 10:44:09.001: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 10:44:09.001: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 10:44:09.006: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 25 10:44:19.020: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 10:44:19.020: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 10:44:19.020: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 10:44:19.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999575s
Jan 25 10:44:20.048: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989770993s
Jan 25 10:44:21.057: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981971035s
Jan 25 10:44:22.065: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.973457711s
Jan 25 10:44:23.073: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.965177653s
Jan 25 10:44:24.086: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.957136473s
Jan 25 10:44:25.314: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.943885641s
Jan 25 10:44:26.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.715798501s
Jan 25 10:44:27.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.702791736s
Jan 25 10:44:28.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 691.166527ms
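
Note: the same halt applies on the way down: the desired replica count has already been dropped to 0, but with all three pods un-Ready the controller deletes none of them during the 10s window above. The next step restores index.html on each pod so the readiness probes pass, after which deletion proceeds in reverse ordinal order (ss-2, then ss-1, then ss-0); that ordering would be visible with something like:

    kubectl get pods -n statefulset-9047 -l baz=blah,foo=bar -w   # terminations arrive in reverse ordinal order
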
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9047
Jan 25 10:44:29.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:44:29.893: INFO: stderr: "I0125 10:44:29.711083    3211 log.go:172] (0xc000a55ad0) (0xc000b04aa0) Create stream\nI0125 10:44:29.711431    3211 log.go:172] (0xc000a55ad0) (0xc000b04aa0) Stream added, broadcasting: 1\nI0125 10:44:29.720012    3211 log.go:172] (0xc000a55ad0) Reply frame received for 1\nI0125 10:44:29.720070    3211 log.go:172] (0xc000a55ad0) (0xc000675b80) Create stream\nI0125 10:44:29.720085    3211 log.go:172] (0xc000a55ad0) (0xc000675b80) Stream added, broadcasting: 3\nI0125 10:44:29.721540    3211 log.go:172] (0xc000a55ad0) Reply frame received for 3\nI0125 10:44:29.721563    3211 log.go:172] (0xc000a55ad0) (0xc00061c780) Create stream\nI0125 10:44:29.721571    3211 log.go:172] (0xc000a55ad0) (0xc00061c780) Stream added, broadcasting: 5\nI0125 10:44:29.723607    3211 log.go:172] (0xc000a55ad0) Reply frame received for 5\nI0125 10:44:29.806523    3211 log.go:172] (0xc000a55ad0) Data frame received for 3\nI0125 10:44:29.806597    3211 log.go:172] (0xc000675b80) (3) Data frame handling\nI0125 10:44:29.806614    3211 log.go:172] (0xc000675b80) (3) Data frame sent\nI0125 10:44:29.806691    3211 log.go:172] (0xc000a55ad0) Data frame received for 5\nI0125 10:44:29.806713    3211 log.go:172] (0xc00061c780) (5) Data frame handling\nI0125 10:44:29.806740    3211 log.go:172] (0xc00061c780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 10:44:29.884907    3211 log.go:172] (0xc000a55ad0) (0xc000675b80) Stream removed, broadcasting: 3\nI0125 10:44:29.885034    3211 log.go:172] (0xc000a55ad0) (0xc00061c780) Stream removed, broadcasting: 5\nI0125 10:44:29.885077    3211 log.go:172] (0xc000a55ad0) Data frame received for 1\nI0125 10:44:29.885085    3211 log.go:172] (0xc000b04aa0) (1) Data frame handling\nI0125 10:44:29.885095    3211 log.go:172] (0xc000b04aa0) (1) Data frame sent\nI0125 10:44:29.885100    3211 log.go:172] (0xc000a55ad0) (0xc000b04aa0) Stream removed, broadcasting: 1\nI0125 10:44:29.885342    3211 log.go:172] (0xc000a55ad0) (0xc000b04aa0) Stream removed, broadcasting: 1\nI0125 10:44:29.885354    3211 log.go:172] (0xc000a55ad0) (0xc000675b80) Stream removed, broadcasting: 3\nI0125 10:44:29.885362    3211 log.go:172] (0xc000a55ad0) (0xc00061c780) Stream removed, broadcasting: 5\n"
Jan 25 10:44:29.894: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 10:44:29.894: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 10:44:29.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:44:30.302: INFO: stderr: "I0125 10:44:30.112963    3231 log.go:172] (0xc0005660b0) (0xc000a38140) Create stream\nI0125 10:44:30.113052    3231 log.go:172] (0xc0005660b0) (0xc000a38140) Stream added, broadcasting: 1\nI0125 10:44:30.115786    3231 log.go:172] (0xc0005660b0) Reply frame received for 1\nI0125 10:44:30.115834    3231 log.go:172] (0xc0005660b0) (0xc0006528c0) Create stream\nI0125 10:44:30.115844    3231 log.go:172] (0xc0005660b0) (0xc0006528c0) Stream added, broadcasting: 3\nI0125 10:44:30.117415    3231 log.go:172] (0xc0005660b0) Reply frame received for 3\nI0125 10:44:30.117448    3231 log.go:172] (0xc0005660b0) (0xc0006afcc0) Create stream\nI0125 10:44:30.117462    3231 log.go:172] (0xc0005660b0) (0xc0006afcc0) Stream added, broadcasting: 5\nI0125 10:44:30.119664    3231 log.go:172] (0xc0005660b0) Reply frame received for 5\nI0125 10:44:30.204445    3231 log.go:172] (0xc0005660b0) Data frame received for 5\nI0125 10:44:30.204528    3231 log.go:172] (0xc0006afcc0) (5) Data frame handling\nI0125 10:44:30.204562    3231 log.go:172] (0xc0006afcc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 10:44:30.204599    3231 log.go:172] (0xc0005660b0) Data frame received for 3\nI0125 10:44:30.204610    3231 log.go:172] (0xc0006528c0) (3) Data frame handling\nI0125 10:44:30.204654    3231 log.go:172] (0xc0006528c0) (3) Data frame sent\nI0125 10:44:30.289016    3231 log.go:172] (0xc0005660b0) Data frame received for 1\nI0125 10:44:30.289060    3231 log.go:172] (0xc000a38140) (1) Data frame handling\nI0125 10:44:30.289087    3231 log.go:172] (0xc000a38140) (1) Data frame sent\nI0125 10:44:30.289125    3231 log.go:172] (0xc0005660b0) (0xc000a38140) Stream removed, broadcasting: 1\nI0125 10:44:30.290625    3231 log.go:172] (0xc0005660b0) (0xc0006afcc0) Stream removed, broadcasting: 5\nI0125 10:44:30.290692    3231 log.go:172] (0xc0005660b0) (0xc0006528c0) Stream removed, broadcasting: 3\nI0125 10:44:30.290819    3231 log.go:172] (0xc0005660b0) (0xc000a38140) Stream removed, broadcasting: 1\nI0125 10:44:30.290840    3231 log.go:172] (0xc0005660b0) (0xc0006528c0) Stream removed, broadcasting: 3\nI0125 10:44:30.290860    3231 log.go:172] (0xc0005660b0) (0xc0006afcc0) Stream removed, broadcasting: 5\n"
Jan 25 10:44:30.302: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 10:44:30.302: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 10:44:30.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:44:30.681: INFO: rc: 126
Jan 25 10:44:30.682: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0125 10:44:30.562683    3254 log.go:172] (0xc000a5f6b0) (0xc0009d8460) Create stream
I0125 10:44:30.563068    3254 log.go:172] (0xc000a5f6b0) (0xc0009d8460) Stream added, broadcasting: 1
I0125 10:44:30.596524    3254 log.go:172] (0xc000a5f6b0) Reply frame received for 1
I0125 10:44:30.596696    3254 log.go:172] (0xc000a5f6b0) (0xc000b2c000) Create stream
I0125 10:44:30.596734    3254 log.go:172] (0xc000a5f6b0) (0xc000b2c000) Stream added, broadcasting: 3
I0125 10:44:30.601210    3254 log.go:172] (0xc000a5f6b0) Reply frame received for 3
I0125 10:44:30.601338    3254 log.go:172] (0xc000a5f6b0) (0xc0009d8000) Create stream
I0125 10:44:30.601428    3254 log.go:172] (0xc000a5f6b0) (0xc0009d8000) Stream added, broadcasting: 5
I0125 10:44:30.608358    3254 log.go:172] (0xc000a5f6b0) Reply frame received for 5
I0125 10:44:30.644329    3254 log.go:172] (0xc000a5f6b0) Data frame received for 3
I0125 10:44:30.644519    3254 log.go:172] (0xc000b2c000) (3) Data frame handling
I0125 10:44:30.644560    3254 log.go:172] (0xc000b2c000) (3) Data frame sent
I0125 10:44:30.653355    3254 log.go:172] (0xc000a5f6b0) Data frame received for 1
I0125 10:44:30.653413    3254 log.go:172] (0xc0009d8460) (1) Data frame handling
I0125 10:44:30.653487    3254 log.go:172] (0xc000a5f6b0) (0xc000b2c000) Stream removed, broadcasting: 3
I0125 10:44:30.653555    3254 log.go:172] (0xc0009d8460) (1) Data frame sent
I0125 10:44:30.653640    3254 log.go:172] (0xc000a5f6b0) (0xc0009d8460) Stream removed, broadcasting: 1
I0125 10:44:30.655559    3254 log.go:172] (0xc000a5f6b0) (0xc0009d8000) Stream removed, broadcasting: 5
I0125 10:44:30.655663    3254 log.go:172] (0xc000a5f6b0) (0xc0009d8460) Stream removed, broadcasting: 1
I0125 10:44:30.655687    3254 log.go:172] (0xc000a5f6b0) (0xc000b2c000) Stream removed, broadcasting: 3
I0125 10:44:30.655745    3254 log.go:172] (0xc000a5f6b0) (0xc0009d8000) Stream removed, broadcasting: 5
I0125 10:44:30.656222    3254 log.go:172] (0xc000a5f6b0) Go away received
command terminated with exit code 126

error:
exit status 126
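
Note: the retries that follow trace the teardown of ss-2 stage by stage: exit code 126 ("cannot exec in a stopped state") while the pod still exists but its container has stopped, then exit code 1 with container not found ("webserver") once the container is gone, and finally exit code 1 with NotFound once the ss-2 pod object itself has been deleted, confirming that the reverse-ordinal scale-down is progressing. The "|| true" in the probed command only masks failures inside the container; it cannot mask exec transport errors, e.g.:

    kubectl get pod ss-2 -n statefulset-9047   # Error from server (NotFound) once deletion completes
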
Jan 25 10:44:40.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:44:40.901: INFO: rc: 1
Jan 25 10:44:40.901: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Jan 25 10:44:50.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:44:51.066: INFO: rc: 1
Jan 25 10:44:51.067: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
[... the same exec attempt and 10s retry repeat from 10:45:01 through 10:49:25, every attempt returning rc: 1 with stderr: Error from server (NotFound): pods "ss-2" not found ...]
Jan 25 10:49:35.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9047 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:49:35.236: INFO: rc: 1
Jan 25 10:49:35.236: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Jan 25 10:49:35.236: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 25 10:49:35.251: INFO: Deleting all statefulset in ns statefulset-9047
Jan 25 10:49:35.254: INFO: Scaling statefulset ss to 0
Jan 25 10:49:35.263: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 10:49:35.266: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:49:35.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9047" for this suite.

• [SLOW TEST:378.881 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":279,"completed":164,"skipped":3164,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:49:35.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 10:49:36.197: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 10:49:38.234: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546176, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546176, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546176, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546176, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
[... the identical deployment status is logged again at 10:49:40, 10:49:42, and 10:49:44 while the webhook deployment becomes ready ...]
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 10:49:47.318: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:49:47.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2658-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:49:48.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9228" for this suite.
STEP: Destroying namespace "webhook-9228-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.266 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":279,"completed":165,"skipped":3166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:49:48.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384
STEP: creating the pod
Jan 25 10:49:48.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6444'
Jan 25 10:49:49.170: INFO: stderr: ""
Jan 25 10:49:49.170: INFO: stdout: "pod/pause created\n"
Jan 25 10:49:49.170: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 25 10:49:49.170: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6444" to be "running and ready"
Jan 25 10:49:49.222: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 51.779718ms
Jan 25 10:49:51.228: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057781744s
Jan 25 10:49:53.297: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126958322s
Jan 25 10:49:55.304: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133801887s
Jan 25 10:49:57.314: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143714063s
Jan 25 10:49:59.326: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.155350895s
Jan 25 10:50:01.336: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.165317665s
Jan 25 10:50:03.344: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 14.173407492s
Jan 25 10:50:03.344: INFO: Pod "pause" satisfied condition "running and ready"
Jan 25 10:50:03.344: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 25 10:50:03.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6444'
Jan 25 10:50:03.508: INFO: stderr: ""
Jan 25 10:50:03.508: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 25 10:50:03.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6444'
Jan 25 10:50:03.701: INFO: stderr: ""
Jan 25 10:50:03.701: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          14s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 25 10:50:03.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6444'
Jan 25 10:50:03.808: INFO: stderr: ""
Jan 25 10:50:03.808: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 25 10:50:03.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6444'
Jan 25 10:50:03.938: INFO: stderr: ""
Jan 25 10:50:03.938: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          14s   \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391
STEP: using delete to clean up resources
Jan 25 10:50:03.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6444'
Jan 25 10:50:04.199: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 10:50:04.199: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 25 10:50:04.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6444'
Jan 25 10:50:04.357: INFO: stderr: "No resources found in kubectl-6444 namespace.\n"
Jan 25 10:50:04.358: INFO: stdout: ""
Jan 25 10:50:04.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6444 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 25 10:50:04.562: INFO: stderr: ""
Jan 25 10:50:04.562: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:50:04.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6444" for this suite.

• [SLOW TEST:15.931 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":279,"completed":166,"skipped":3208,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:50:04.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 25 10:50:04.733: INFO: Waiting up to 5m0s for pod "pod-1fe3b67d-47af-463b-ba69-f53cad8633e2" in namespace "emptydir-467" to be "success or failure"
Jan 25 10:50:04.738: INFO: Pod "pod-1fe3b67d-47af-463b-ba69-f53cad8633e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311295ms
Jan 25 10:50:06.761: INFO: Pod "pod-1fe3b67d-47af-463b-ba69-f53cad8633e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027827407s
Jan 25 10:50:08.791: INFO: Pod "pod-1fe3b67d-47af-463b-ba69-f53cad8633e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057379837s
Jan 25 10:50:10.802: INFO: Pod "pod-1fe3b67d-47af-463b-ba69-f53cad8633e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068441153s
Jan 25 10:50:12.811: INFO: Pod "pod-1fe3b67d-47af-463b-ba69-f53cad8633e2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077894905s
Jan 25 10:50:14.818: INFO: Pod "pod-1fe3b67d-47af-463b-ba69-f53cad8633e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084651157s
STEP: Saw pod success
Jan 25 10:50:14.818: INFO: Pod "pod-1fe3b67d-47af-463b-ba69-f53cad8633e2" satisfied condition "success or failure"
Jan 25 10:50:14.824: INFO: Trying to get logs from node jerma-node pod pod-1fe3b67d-47af-463b-ba69-f53cad8633e2 container test-container: 
STEP: delete the pod
Jan 25 10:50:15.123: INFO: Waiting for pod pod-1fe3b67d-47af-463b-ba69-f53cad8633e2 to disappear
Jan 25 10:50:15.134: INFO: Pod pod-1fe3b67d-47af-463b-ba69-f53cad8633e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:50:15.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-467" for this suite.

• [SLOW TEST:10.588 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":167,"skipped":3211,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:50:15.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's args
Jan 25 10:50:15.389: INFO: Waiting up to 5m0s for pod "var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5" in namespace "var-expansion-5568" to be "success or failure"
Jan 25 10:50:15.418: INFO: Pod "var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.602912ms
Jan 25 10:50:17.427: INFO: Pod "var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037861902s
Jan 25 10:50:19.438: INFO: Pod "var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048823937s
Jan 25 10:50:21.448: INFO: Pod "var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05807798s
Jan 25 10:50:23.457: INFO: Pod "var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067590045s
Jan 25 10:50:25.468: INFO: Pod "var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07844421s
STEP: Saw pod success
Jan 25 10:50:25.468: INFO: Pod "var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5" satisfied condition "success or failure"
Jan 25 10:50:25.472: INFO: Trying to get logs from node jerma-node pod var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5 container dapi-container: 
STEP: delete the pod
Jan 25 10:50:25.518: INFO: Waiting for pod var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5 to disappear
Jan 25 10:50:25.548: INFO: Pod var-expansion-3ca05578-4b6f-40e8-ade4-9d7b16e3a2d5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:50:25.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5568" for this suite.

• [SLOW TEST:10.412 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":279,"completed":168,"skipped":3226,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:50:25.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 10:50:25.723: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad" in namespace "projected-4659" to be "success or failure"
Jan 25 10:50:25.826: INFO: Pod "downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad": Phase="Pending", Reason="", readiness=false. Elapsed: 102.6925ms
Jan 25 10:50:27.836: INFO: Pod "downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113585901s
Jan 25 10:50:29.844: INFO: Pod "downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121247425s
Jan 25 10:50:31.854: INFO: Pod "downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131087359s
Jan 25 10:50:33.969: INFO: Pod "downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.246149217s
Jan 25 10:50:35.975: INFO: Pod "downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.251809887s
STEP: Saw pod success
Jan 25 10:50:35.975: INFO: Pod "downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad" satisfied condition "success or failure"
Jan 25 10:50:35.978: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad container client-container: 
STEP: delete the pod
Jan 25 10:50:36.026: INFO: Waiting for pod downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad to disappear
Jan 25 10:50:36.080: INFO: Pod downwardapi-volume-12a7c8d8-e6f6-4c8b-a7ed-eeeeaf937dad no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:50:36.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4659" for this suite.

• [SLOW TEST:10.516 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":279,"completed":169,"skipped":3234,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:50:36.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating pod
Jan 25 10:50:44.322: INFO: Pod pod-hostip-87a780d5-8ec8-48db-add2-0d9773b498c5 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:50:44.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2206" for this suite.

• [SLOW TEST:8.242 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":279,"completed":170,"skipped":3236,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:50:44.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0125 10:50:56.194309       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 10:50:56.194: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:50:56.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4426" for this suite.

• [SLOW TEST:11.880 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":279,"completed":171,"skipped":3262,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:50:56.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:50:56.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5876" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":279,"completed":172,"skipped":3269,"failed":0}

------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:50:56.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 25 10:51:34.599: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 10:51:34.617: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 10:51:36.618: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 10:51:36.625: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 10:51:38.618: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 10:51:38.624: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 10:51:40.618: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 10:51:40.626: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 10:51:42.619: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 10:51:42.630: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:51:42.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2838" for this suite.

• [SLOW TEST:46.081 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":279,"completed":173,"skipped":3269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:51:42.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 25 10:51:42.760: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:51:57.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6432" for this suite.

• [SLOW TEST:15.099 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":279,"completed":174,"skipped":3294,"failed":0}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:51:57.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-downwardapi-8tcz
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 10:51:57.942: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-8tcz" in namespace "subpath-141" to be "success or failure"
Jan 25 10:51:57.963: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Pending", Reason="", readiness=false. Elapsed: 20.625823ms
Jan 25 10:51:59.968: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025787122s
Jan 25 10:52:01.977: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035066618s
Jan 25 10:52:03.985: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042475076s
Jan 25 10:52:05.989: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 8.04677852s
Jan 25 10:52:08.000: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 10.057745616s
Jan 25 10:52:10.007: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 12.065180683s
Jan 25 10:52:12.017: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 14.074465409s
Jan 25 10:52:14.041: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 16.098810697s
Jan 25 10:52:16.047: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 18.105198243s
Jan 25 10:52:18.063: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 20.121181428s
Jan 25 10:52:20.080: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 22.137460747s
Jan 25 10:52:22.089: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 24.146610221s
Jan 25 10:52:24.513: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 26.571206338s
Jan 25 10:52:26.533: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Running", Reason="", readiness=true. Elapsed: 28.591314853s
Jan 25 10:52:28.544: INFO: Pod "pod-subpath-test-downwardapi-8tcz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.602094406s
STEP: Saw pod success
Jan 25 10:52:28.545: INFO: Pod "pod-subpath-test-downwardapi-8tcz" satisfied condition "success or failure"
Jan 25 10:52:28.549: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-8tcz container test-container-subpath-downwardapi-8tcz: 
STEP: delete the pod
Jan 25 10:52:28.704: INFO: Waiting for pod pod-subpath-test-downwardapi-8tcz to disappear
Jan 25 10:52:28.717: INFO: Pod pod-subpath-test-downwardapi-8tcz no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-8tcz
Jan 25 10:52:28.717: INFO: Deleting pod "pod-subpath-test-downwardapi-8tcz" in namespace "subpath-141"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:52:28.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-141" for this suite.

• [SLOW TEST:30.998 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":279,"completed":175,"skipped":3299,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:52:28.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Jan 25 10:52:28.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 25 10:52:29.148: INFO: stderr: ""
Jan 25 10:52:29.148: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:52:29.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4081" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":279,"completed":176,"skipped":3307,"failed":0}
SSSSSSSSSSSSS
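The same check can be run by hand; grep -x insists on an exact whole-line match for the core group:

kubectl api-versions | grep -x v1 && echo 'core v1 is served'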
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:52:29.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:52:29.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:52:37.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8349" for this suite.

• [SLOW TEST:8.440 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":279,"completed":177,"skipped":3320,"failed":0}
SS
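kubectl exec drives the same pods/exec subresource that this spec exercises over a raw websocket, so a quick manual equivalent looks like this (hypothetical pod name; assumes a test cluster):

kubectl run ws-demo --image=busybox:1.29 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/ws-demo
kubectl exec ws-demo -- echo remote execution works
kubectl delete pod ws-demo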
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:52:37.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating cluster-info
Jan 25 10:52:37.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 25 10:52:39.623: INFO: stderr: ""
Jan 25 10:52:39.623: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:52:39.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1470" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":279,"completed":178,"skipped":3322,"failed":0}
SSSSS
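The escaped \x1b[... sequences in the captured stdout above are ANSI color codes; running the command directly shows the readable form, and the dump subcommand gives the fuller state the output suggests:

kubectl cluster-info                       # master and KubeDNS endpoints
kubectl cluster-info dump | head -n 20     # verbose cluster state for debugging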
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:52:39.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8217
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Jan 25 10:52:39.759: INFO: Found 0 stateful pods, waiting for 3
Jan 25 10:52:49.803: INFO: Found 2 stateful pods, waiting for 3
Jan 25 10:52:59.768: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:52:59.768: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:52:59.768: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 10:53:09.768: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:53:09.768: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:53:09.768: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 10:53:09.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8217 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 10:53:10.287: INFO: stderr: "I0125 10:53:10.050211    4017 log.go:172] (0xc000a4e160) (0xc000ac6280) Create stream\nI0125 10:53:10.050390    4017 log.go:172] (0xc000a4e160) (0xc000ac6280) Stream added, broadcasting: 1\nI0125 10:53:10.053779    4017 log.go:172] (0xc000a4e160) Reply frame received for 1\nI0125 10:53:10.053814    4017 log.go:172] (0xc000a4e160) (0xc000acc280) Create stream\nI0125 10:53:10.053822    4017 log.go:172] (0xc000a4e160) (0xc000acc280) Stream added, broadcasting: 3\nI0125 10:53:10.054721    4017 log.go:172] (0xc000a4e160) Reply frame received for 3\nI0125 10:53:10.054750    4017 log.go:172] (0xc000a4e160) (0xc000ac6320) Create stream\nI0125 10:53:10.054756    4017 log.go:172] (0xc000a4e160) (0xc000ac6320) Stream added, broadcasting: 5\nI0125 10:53:10.055925    4017 log.go:172] (0xc000a4e160) Reply frame received for 5\nI0125 10:53:10.142115    4017 log.go:172] (0xc000a4e160) Data frame received for 5\nI0125 10:53:10.142195    4017 log.go:172] (0xc000ac6320) (5) Data frame handling\nI0125 10:53:10.142210    4017 log.go:172] (0xc000ac6320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 10:53:10.170839    4017 log.go:172] (0xc000a4e160) Data frame received for 3\nI0125 10:53:10.170888    4017 log.go:172] (0xc000acc280) (3) Data frame handling\nI0125 10:53:10.170903    4017 log.go:172] (0xc000acc280) (3) Data frame sent\nI0125 10:53:10.273854    4017 log.go:172] (0xc000a4e160) Data frame received for 1\nI0125 10:53:10.273906    4017 log.go:172] (0xc000ac6280) (1) Data frame handling\nI0125 10:53:10.273927    4017 log.go:172] (0xc000ac6280) (1) Data frame sent\nI0125 10:53:10.274084    4017 log.go:172] (0xc000a4e160) (0xc000ac6280) Stream removed, broadcasting: 1\nI0125 10:53:10.274180    4017 log.go:172] (0xc000a4e160) (0xc000acc280) Stream removed, broadcasting: 3\nI0125 10:53:10.274603    4017 log.go:172] (0xc000a4e160) (0xc000ac6320) Stream removed, broadcasting: 5\nI0125 10:53:10.274654    4017 log.go:172] (0xc000a4e160) (0xc000ac6280) Stream removed, broadcasting: 1\nI0125 10:53:10.274666    4017 log.go:172] (0xc000a4e160) (0xc000acc280) Stream removed, broadcasting: 3\nI0125 10:53:10.274683    4017 log.go:172] (0xc000a4e160) (0xc000ac6320) Stream removed, broadcasting: 5\n"
Jan 25 10:53:10.288: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 10:53:10.288: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 25 10:53:20.346: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 25 10:53:30.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8217 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:53:30.859: INFO: stderr: "I0125 10:53:30.697817    4037 log.go:172] (0xc0003c4210) (0xc000594d20) Create stream\nI0125 10:53:30.697978    4037 log.go:172] (0xc0003c4210) (0xc000594d20) Stream added, broadcasting: 1\nI0125 10:53:30.700513    4037 log.go:172] (0xc0003c4210) Reply frame received for 1\nI0125 10:53:30.700601    4037 log.go:172] (0xc0003c4210) (0xc000a5a000) Create stream\nI0125 10:53:30.700617    4037 log.go:172] (0xc0003c4210) (0xc000a5a000) Stream added, broadcasting: 3\nI0125 10:53:30.701882    4037 log.go:172] (0xc0003c4210) Reply frame received for 3\nI0125 10:53:30.701928    4037 log.go:172] (0xc0003c4210) (0xc000a26000) Create stream\nI0125 10:53:30.701940    4037 log.go:172] (0xc0003c4210) (0xc000a26000) Stream added, broadcasting: 5\nI0125 10:53:30.703085    4037 log.go:172] (0xc0003c4210) Reply frame received for 5\nI0125 10:53:30.764626    4037 log.go:172] (0xc0003c4210) Data frame received for 3\nI0125 10:53:30.764678    4037 log.go:172] (0xc000a5a000) (3) Data frame handling\nI0125 10:53:30.764695    4037 log.go:172] (0xc000a5a000) (3) Data frame sent\nI0125 10:53:30.764744    4037 log.go:172] (0xc0003c4210) Data frame received for 5\nI0125 10:53:30.764755    4037 log.go:172] (0xc000a26000) (5) Data frame handling\nI0125 10:53:30.764765    4037 log.go:172] (0xc000a26000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 10:53:30.846071    4037 log.go:172] (0xc0003c4210) Data frame received for 1\nI0125 10:53:30.846124    4037 log.go:172] (0xc0003c4210) (0xc000a26000) Stream removed, broadcasting: 5\nI0125 10:53:30.846157    4037 log.go:172] (0xc000594d20) (1) Data frame handling\nI0125 10:53:30.846183    4037 log.go:172] (0xc000594d20) (1) Data frame sent\nI0125 10:53:30.846220    4037 log.go:172] (0xc0003c4210) (0xc000a5a000) Stream removed, broadcasting: 3\nI0125 10:53:30.846248    4037 log.go:172] (0xc0003c4210) (0xc000594d20) Stream removed, broadcasting: 1\nI0125 10:53:30.846263    4037 log.go:172] (0xc0003c4210) Go away received\nI0125 10:53:30.847274    4037 log.go:172] (0xc0003c4210) (0xc000594d20) Stream removed, broadcasting: 1\nI0125 10:53:30.847293    4037 log.go:172] (0xc0003c4210) (0xc000a5a000) Stream removed, broadcasting: 3\nI0125 10:53:30.847308    4037 log.go:172] (0xc0003c4210) (0xc000a26000) Stream removed, broadcasting: 5\n"
Jan 25 10:53:30.860: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 10:53:30.860: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 10:53:40.910: INFO: Waiting for StatefulSet statefulset-8217/ss2 to complete update
Jan 25 10:53:40.911: INFO: Waiting for Pod statefulset-8217/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 10:53:40.911: INFO: Waiting for Pod statefulset-8217/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 10:53:40.911: INFO: Waiting for Pod statefulset-8217/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 10:53:50.928: INFO: Waiting for StatefulSet statefulset-8217/ss2 to complete update
Jan 25 10:53:50.928: INFO: Waiting for Pod statefulset-8217/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 10:53:50.928: INFO: Waiting for Pod statefulset-8217/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 10:54:00.965: INFO: Waiting for StatefulSet statefulset-8217/ss2 to complete update
Jan 25 10:54:00.966: INFO: Waiting for Pod statefulset-8217/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 10:54:00.966: INFO: Waiting for Pod statefulset-8217/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 10:54:10.929: INFO: Waiting for StatefulSet statefulset-8217/ss2 to complete update
Jan 25 10:54:10.929: INFO: Waiting for Pod statefulset-8217/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 10:54:20.928: INFO: Waiting for StatefulSet statefulset-8217/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 25 10:54:30.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8217 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 10:54:31.235: INFO: stderr: "I0125 10:54:31.053318    4058 log.go:172] (0xc000bf8e70) (0xc000c543c0) Create stream\nI0125 10:54:31.053423    4058 log.go:172] (0xc000bf8e70) (0xc000c543c0) Stream added, broadcasting: 1\nI0125 10:54:31.055789    4058 log.go:172] (0xc000bf8e70) Reply frame received for 1\nI0125 10:54:31.055818    4058 log.go:172] (0xc000bf8e70) (0xc000cc4140) Create stream\nI0125 10:54:31.055825    4058 log.go:172] (0xc000bf8e70) (0xc000cc4140) Stream added, broadcasting: 3\nI0125 10:54:31.057166    4058 log.go:172] (0xc000bf8e70) Reply frame received for 3\nI0125 10:54:31.057182    4058 log.go:172] (0xc000bf8e70) (0xc000c54460) Create stream\nI0125 10:54:31.057189    4058 log.go:172] (0xc000bf8e70) (0xc000c54460) Stream added, broadcasting: 5\nI0125 10:54:31.058839    4058 log.go:172] (0xc000bf8e70) Reply frame received for 5\nI0125 10:54:31.120318    4058 log.go:172] (0xc000bf8e70) Data frame received for 5\nI0125 10:54:31.120513    4058 log.go:172] (0xc000c54460) (5) Data frame handling\nI0125 10:54:31.120533    4058 log.go:172] (0xc000c54460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 10:54:31.155561    4058 log.go:172] (0xc000bf8e70) Data frame received for 3\nI0125 10:54:31.155581    4058 log.go:172] (0xc000cc4140) (3) Data frame handling\nI0125 10:54:31.155600    4058 log.go:172] (0xc000cc4140) (3) Data frame sent\nI0125 10:54:31.225376    4058 log.go:172] (0xc000bf8e70) Data frame received for 1\nI0125 10:54:31.225427    4058 log.go:172] (0xc000c543c0) (1) Data frame handling\nI0125 10:54:31.225450    4058 log.go:172] (0xc000c543c0) (1) Data frame sent\nI0125 10:54:31.225466    4058 log.go:172] (0xc000bf8e70) (0xc000c543c0) Stream removed, broadcasting: 1\nI0125 10:54:31.225756    4058 log.go:172] (0xc000bf8e70) (0xc000cc4140) Stream removed, broadcasting: 3\nI0125 10:54:31.226222    4058 log.go:172] (0xc000bf8e70) (0xc000c54460) Stream removed, broadcasting: 5\nI0125 10:54:31.226260    4058 log.go:172] (0xc000bf8e70) (0xc000c543c0) Stream removed, broadcasting: 1\nI0125 10:54:31.226277    4058 log.go:172] (0xc000bf8e70) (0xc000cc4140) Stream removed, broadcasting: 3\nI0125 10:54:31.226286    4058 log.go:172] (0xc000bf8e70) (0xc000c54460) Stream removed, broadcasting: 5\n"
Jan 25 10:54:31.235: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 10:54:31.235: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 10:54:41.285: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 25 10:54:51.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8217 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 10:54:51.713: INFO: stderr: "I0125 10:54:51.519270    4078 log.go:172] (0xc0009ace70) (0xc000abe8c0) Create stream\nI0125 10:54:51.519411    4078 log.go:172] (0xc0009ace70) (0xc000abe8c0) Stream added, broadcasting: 1\nI0125 10:54:51.521770    4078 log.go:172] (0xc0009ace70) Reply frame received for 1\nI0125 10:54:51.521798    4078 log.go:172] (0xc0009ace70) (0xc0009941e0) Create stream\nI0125 10:54:51.521810    4078 log.go:172] (0xc0009ace70) (0xc0009941e0) Stream added, broadcasting: 3\nI0125 10:54:51.522952    4078 log.go:172] (0xc0009ace70) Reply frame received for 3\nI0125 10:54:51.522980    4078 log.go:172] (0xc0009ace70) (0xc000abe960) Create stream\nI0125 10:54:51.522999    4078 log.go:172] (0xc0009ace70) (0xc000abe960) Stream added, broadcasting: 5\nI0125 10:54:51.524157    4078 log.go:172] (0xc0009ace70) Reply frame received for 5\nI0125 10:54:51.593653    4078 log.go:172] (0xc0009ace70) Data frame received for 3\nI0125 10:54:51.593722    4078 log.go:172] (0xc0009941e0) (3) Data frame handling\nI0125 10:54:51.593740    4078 log.go:172] (0xc0009941e0) (3) Data frame sent\nI0125 10:54:51.593778    4078 log.go:172] (0xc0009ace70) Data frame received for 5\nI0125 10:54:51.593787    4078 log.go:172] (0xc000abe960) (5) Data frame handling\nI0125 10:54:51.593800    4078 log.go:172] (0xc000abe960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 10:54:51.704035    4078 log.go:172] (0xc0009ace70) Data frame received for 1\nI0125 10:54:51.704100    4078 log.go:172] (0xc000abe8c0) (1) Data frame handling\nI0125 10:54:51.704186    4078 log.go:172] (0xc000abe8c0) (1) Data frame sent\nI0125 10:54:51.704216    4078 log.go:172] (0xc0009ace70) (0xc000abe8c0) Stream removed, broadcasting: 1\nI0125 10:54:51.705246    4078 log.go:172] (0xc0009ace70) (0xc0009941e0) Stream removed, broadcasting: 3\nI0125 10:54:51.705334    4078 log.go:172] (0xc0009ace70) (0xc000abe960) Stream removed, broadcasting: 5\nI0125 10:54:51.705378    4078 log.go:172] (0xc0009ace70) Go away received\nI0125 10:54:51.705437    4078 log.go:172] (0xc0009ace70) (0xc000abe8c0) Stream removed, broadcasting: 1\nI0125 10:54:51.705461    4078 log.go:172] (0xc0009ace70) (0xc0009941e0) Stream removed, broadcasting: 3\nI0125 10:54:51.705481    4078 log.go:172] (0xc0009ace70) (0xc000abe960) Stream removed, broadcasting: 5\n"
Jan 25 10:54:51.714: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 10:54:51.714: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 10:55:01.751: INFO: Waiting for StatefulSet statefulset-8217/ss2 to complete update
Jan 25 10:55:01.751: INFO: Waiting for Pod statefulset-8217/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 25 10:55:01.751: INFO: Waiting for Pod statefulset-8217/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 25 10:55:01.751: INFO: Waiting for Pod statefulset-8217/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 25 10:55:11.770: INFO: Waiting for StatefulSet statefulset-8217/ss2 to complete update
Jan 25 10:55:11.771: INFO: Waiting for Pod statefulset-8217/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 25 10:55:11.771: INFO: Waiting for Pod statefulset-8217/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 25 10:55:21.763: INFO: Waiting for StatefulSet statefulset-8217/ss2 to complete update
Jan 25 10:55:21.763: INFO: Waiting for Pod statefulset-8217/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 25 10:55:21.763: INFO: Waiting for Pod statefulset-8217/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 25 10:55:31.778: INFO: Waiting for StatefulSet statefulset-8217/ss2 to complete update
Jan 25 10:55:31.778: INFO: Waiting for Pod statefulset-8217/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 25 10:55:41.840: INFO: Waiting for StatefulSet statefulset-8217/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 25 10:55:51.767: INFO: Deleting all statefulset in ns statefulset-8217
Jan 25 10:55:51.773: INFO: Scaling statefulset ss2 to 0
Jan 25 10:56:31.814: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 10:56:31.818: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:56:31.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8217" for this suite.

• [SLOW TEST:232.309 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":279,"completed":179,"skipped":3327,"failed":0}
SSSSSSSSSSSSS
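The update-then-rollback sequence the spec performs maps onto ordinary rollout commands. Namespace and object names are taken from the log; the container name "webserver" is a guess, since the log never prints it:

kubectl -n statefulset-8217 set image statefulset/ss2 webserver=httpd:2.4.39-alpine
kubectl -n statefulset-8217 rollout status statefulset/ss2    # wait for the new revision
kubectl -n statefulset-8217 rollout undo statefulset/ss2      # back to httpd:2.4.38-alpine
kubectl -n statefulset-8217 rollout history statefulset/ss2   # list controller revisions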
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:56:31.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: executing a command with run --rm and attach with stdin
Jan 25 10:56:32.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5404 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 25 10:56:44.688: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0125 10:56:42.841059    4098 log.go:172] (0xc0009d31e0) (0xc00069dae0) Create stream\nI0125 10:56:42.841118    4098 log.go:172] (0xc0009d31e0) (0xc00069dae0) Stream added, broadcasting: 1\nI0125 10:56:42.844570    4098 log.go:172] (0xc0009d31e0) Reply frame received for 1\nI0125 10:56:42.844688    4098 log.go:172] (0xc0009d31e0) (0xc000752000) Create stream\nI0125 10:56:42.844722    4098 log.go:172] (0xc0009d31e0) (0xc000752000) Stream added, broadcasting: 3\nI0125 10:56:42.847252    4098 log.go:172] (0xc0009d31e0) Reply frame received for 3\nI0125 10:56:42.847285    4098 log.go:172] (0xc0009d31e0) (0xc0007520a0) Create stream\nI0125 10:56:42.847294    4098 log.go:172] (0xc0009d31e0) (0xc0007520a0) Stream added, broadcasting: 5\nI0125 10:56:42.849978    4098 log.go:172] (0xc0009d31e0) Reply frame received for 5\nI0125 10:56:42.850025    4098 log.go:172] (0xc0009d31e0) (0xc000752140) Create stream\nI0125 10:56:42.850039    4098 log.go:172] (0xc0009d31e0) (0xc000752140) Stream added, broadcasting: 7\nI0125 10:56:42.855479    4098 log.go:172] (0xc0009d31e0) Reply frame received for 7\nI0125 10:56:42.855719    4098 log.go:172] (0xc000752000) (3) Writing data frame\nI0125 10:56:42.855927    4098 log.go:172] (0xc000752000) (3) Writing data frame\nI0125 10:56:42.865912    4098 log.go:172] (0xc0009d31e0) Data frame received for 5\nI0125 10:56:42.865940    4098 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0125 10:56:42.865962    4098 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0125 10:56:42.867936    4098 log.go:172] (0xc0009d31e0) Data frame received for 5\nI0125 10:56:42.867954    4098 log.go:172] (0xc0007520a0) (5) Data frame handling\nI0125 10:56:42.867961    4098 log.go:172] (0xc0007520a0) (5) Data frame sent\nI0125 10:56:44.630157    4098 log.go:172] (0xc0009d31e0) (0xc000752000) Stream removed, broadcasting: 3\nI0125 10:56:44.630491    4098 log.go:172] (0xc0009d31e0) Data frame received for 1\nI0125 10:56:44.630524    4098 log.go:172] (0xc00069dae0) (1) Data frame handling\nI0125 10:56:44.630710    4098 log.go:172] (0xc00069dae0) (1) Data frame sent\nI0125 10:56:44.630732    4098 log.go:172] (0xc0009d31e0) (0xc00069dae0) Stream removed, broadcasting: 1\nI0125 10:56:44.630858    4098 log.go:172] (0xc0009d31e0) (0xc0007520a0) Stream removed, broadcasting: 5\nI0125 10:56:44.630932    4098 log.go:172] (0xc0009d31e0) (0xc000752140) Stream removed, broadcasting: 7\nI0125 10:56:44.631179    4098 log.go:172] (0xc0009d31e0) Go away received\nI0125 10:56:44.631567    4098 log.go:172] (0xc0009d31e0) (0xc00069dae0) Stream removed, broadcasting: 1\nI0125 10:56:44.631633    4098 log.go:172] (0xc0009d31e0) (0xc000752000) Stream removed, broadcasting: 3\nI0125 10:56:44.631645    4098 log.go:172] (0xc0009d31e0) (0xc0007520a0) Stream removed, broadcasting: 5\nI0125 10:56:44.631681    4098 log.go:172] (0xc0009d31e0) (0xc000752140) Stream removed, broadcasting: 7\n"
Jan 25 10:56:44.689: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:56:46.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5404" for this suite.

• [SLOW TEST:14.779 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":279,"completed":180,"skipped":3340,"failed":0}
SSSSS
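A hand-run version of the same flow, including the replacement that the deprecation warning in stderr points at. The --generator flag only exists on kubectl clients of this era; names are hypothetical:

echo abcd1234 | kubectl run rm-job-demo --image=busybox:1.29 --rm --restart=OnFailure \
  --generator=job/v1 --attach --stdin -- sh -c 'cat && echo stdin closed'
# on newer clients, create and delete the job explicitly instead:
kubectl create job rm-job-demo --image=busybox:1.29 -- sh -c 'echo stdin closed'
kubectl delete job rm-job-demo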
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:56:46.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test hostPath mode
Jan 25 10:56:46.905: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9065" to be "success or failure"
Jan 25 10:56:46.998: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 92.648459ms
Jan 25 10:56:49.003: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097781773s
Jan 25 10:56:51.012: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106904313s
Jan 25 10:56:53.020: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114900633s
Jan 25 10:56:55.027: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122332667s
Jan 25 10:56:57.036: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.130788498s
Jan 25 10:56:59.109: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.203558276s
Jan 25 10:57:01.114: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.209063031s
Jan 25 10:57:03.158: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.253137864s
STEP: Saw pod success
Jan 25 10:57:03.158: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 25 10:57:03.165: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 25 10:57:03.370: INFO: Waiting for pod pod-host-path-test to disappear
Jan 25 10:57:03.409: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:57:03.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9065" for this suite.

• [SLOW TEST:16.692 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":181,"skipped":3345,"failed":0}
SSSSSSSSS
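A minimal sketch of a mode-checking hostPath pod, under the assumption of a hypothetical host path and names; the container simply prints the mount's mode bits:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.29
    command: ["ls", "-ld", "/test-volume"]   # prints the mount's mode bits
    volumeMounts:
    - name: hp
      mountPath: /test-volume
  volumes:
  - name: hp
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF
kubectl logs hostpath-mode-demo              # inspect the reported mode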
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:57:03.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:57:03.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:57:13.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5150" for this suite.

• [SLOW TEST:10.392 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":279,"completed":182,"skipped":3354,"failed":0}
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:57:13.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 25 10:57:13.991: INFO: Waiting up to 5m0s for pod "downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df" in namespace "downward-api-2968" to be "success or failure"
Jan 25 10:57:14.009: INFO: Pod "downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df": Phase="Pending", Reason="", readiness=false. Elapsed: 17.269591ms
Jan 25 10:57:16.016: INFO: Pod "downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024874064s
Jan 25 10:57:18.025: INFO: Pod "downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033750181s
Jan 25 10:57:20.038: INFO: Pod "downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046318316s
Jan 25 10:57:22.046: INFO: Pod "downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df": Phase="Running", Reason="", readiness=true. Elapsed: 8.054409178s
Jan 25 10:57:24.054: INFO: Pod "downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062334829s
STEP: Saw pod success
Jan 25 10:57:24.054: INFO: Pod "downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df" satisfied condition "success or failure"
Jan 25 10:57:24.060: INFO: Trying to get logs from node jerma-node pod downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df container dapi-container: 
STEP: delete the pod
Jan 25 10:57:24.118: INFO: Waiting for pod downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df to disappear
Jan 25 10:57:24.141: INFO: Pod downward-api-33ab3c6b-cf25-44bf-9892-054a7aa2d1df no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:57:24.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2968" for this suite.

• [SLOW TEST:10.348 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":279,"completed":183,"skipped":3354,"failed":0}
SSSSS
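The three env vars the spec asserts come from downward API fieldRef projections; a minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
    - name: POD_IP
      valueFrom: {fieldRef: {fieldPath: status.podIP}}
EOF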
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:57:24.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test env composition
Jan 25 10:57:24.321: INFO: Waiting up to 5m0s for pod "var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4" in namespace "var-expansion-1036" to be "success or failure"
Jan 25 10:57:24.332: INFO: Pod "var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.073917ms
Jan 25 10:57:26.340: INFO: Pod "var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019312884s
Jan 25 10:57:28.354: INFO: Pod "var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033345625s
Jan 25 10:57:30.365: INFO: Pod "var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0436414s
Jan 25 10:57:32.374: INFO: Pod "var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052489953s
Jan 25 10:57:34.382: INFO: Pod "var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061262336s
STEP: Saw pod success
Jan 25 10:57:34.383: INFO: Pod "var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4" satisfied condition "success or failure"
Jan 25 10:57:34.385: INFO: Trying to get logs from node jerma-node pod var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4 container dapi-container: 
STEP: delete the pod
Jan 25 10:57:34.452: INFO: Waiting for pod var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4 to disappear
Jan 25 10:57:34.457: INFO: Pod var-expansion-48230425-dc5a-4410-842a-22d39da6dbc4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:57:34.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1036" for this suite.

• [SLOW TEST:10.302 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":279,"completed":184,"skipped":3359,"failed":0}
SSSS
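Env composition uses the $(VAR) syntax, expanded by the kubelet rather than by a shell; a minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo $COMPOSED"]
    env:
    - name: FIRST
      value: hello
    - name: COMPOSED
      value: $(FIRST)-world    # resolves to hello-world before the container starts
EOF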
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:57:34.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 10:57:34.635: INFO: Create a RollingUpdate DaemonSet
Jan 25 10:57:34.644: INFO: Check that daemon pods launch on every node of the cluster
Jan 25 10:57:34.772: INFO: Number of nodes with available pods: 0
Jan 25 10:57:34.773: INFO: Node jerma-node is running more than one daemon pod
Jan 25 10:57:36.597: INFO: Number of nodes with available pods: 0
Jan 25 10:57:36.597: INFO: Node jerma-node is running more than one daemon pod
Jan 25 10:57:37.373: INFO: Number of nodes with available pods: 0
Jan 25 10:57:37.373: INFO: Node jerma-node is running more than one daemon pod
Jan 25 10:57:38.183: INFO: Number of nodes with available pods: 0
Jan 25 10:57:38.183: INFO: Node jerma-node is running more than one daemon pod
Jan 25 10:57:38.841: INFO: Number of nodes with available pods: 0
Jan 25 10:57:38.841: INFO: Node jerma-node is running more than one daemon pod
Jan 25 10:57:39.875: INFO: Number of nodes with available pods: 0
Jan 25 10:57:39.875: INFO: Node jerma-node is running more than one daemon pod
Jan 25 10:57:42.533: INFO: Number of nodes with available pods: 0
Jan 25 10:57:42.533: INFO: Node jerma-node is running more than one daemon pod
Jan 25 10:57:43.081: INFO: Number of nodes with available pods: 0
Jan 25 10:57:43.081: INFO: Node jerma-node is running more than one daemon pod
Jan 25 10:57:44.473: INFO: Number of nodes with available pods: 0
Jan 25 10:57:44.474: INFO: Node jerma-node is running more than one daemon pod
Jan 25 10:57:44.790: INFO: Number of nodes with available pods: 0
Jan 25 10:57:44.790: INFO: Node jerma-node is running more than one daemon pod
Jan 25 10:57:45.792: INFO: Number of nodes with available pods: 1
Jan 25 10:57:45.793: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 10:57:46.787: INFO: Number of nodes with available pods: 2
Jan 25 10:57:46.787: INFO: Number of running nodes: 2, number of available pods: 2
Jan 25 10:57:46.787: INFO: Update the DaemonSet to trigger a rollout
Jan 25 10:57:46.801: INFO: Updating DaemonSet daemon-set
Jan 25 10:58:02.828: INFO: Roll back the DaemonSet before rollout is complete
Jan 25 10:58:02.867: INFO: Updating DaemonSet daemon-set
Jan 25 10:58:02.867: INFO: Make sure DaemonSet rollback is complete
Jan 25 10:58:02.874: INFO: Wrong image for pod: daemon-set-2g996. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 10:58:02.874: INFO: Pod daemon-set-2g996 is not available
Jan 25 10:58:03.951: INFO: Wrong image for pod: daemon-set-2g996. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 10:58:03.951: INFO: Pod daemon-set-2g996 is not available
Jan 25 10:58:04.897: INFO: Wrong image for pod: daemon-set-2g996. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 10:58:04.897: INFO: Pod daemon-set-2g996 is not available
Jan 25 10:58:05.904: INFO: Wrong image for pod: daemon-set-2g996. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 10:58:05.904: INFO: Pod daemon-set-2g996 is not available
Jan 25 10:58:06.894: INFO: Wrong image for pod: daemon-set-2g996. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 10:58:06.894: INFO: Pod daemon-set-2g996 is not available
Jan 25 10:58:07.935: INFO: Wrong image for pod: daemon-set-2g996. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 10:58:07.935: INFO: Pod daemon-set-2g996 is not available
Jan 25 10:58:08.897: INFO: Wrong image for pod: daemon-set-2g996. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 10:58:08.897: INFO: Pod daemon-set-2g996 is not available
Jan 25 10:58:09.896: INFO: Wrong image for pod: daemon-set-2g996. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 10:58:09.896: INFO: Pod daemon-set-2g996 is not available
Jan 25 10:58:10.905: INFO: Wrong image for pod: daemon-set-2g996. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 10:58:10.906: INFO: Pod daemon-set-2g996 is not available
Jan 25 10:58:11.899: INFO: Wrong image for pod: daemon-set-2g996. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 10:58:11.899: INFO: Pod daemon-set-2g996 is not available
Jan 25 10:58:12.902: INFO: Pod daemon-set-9vqmp is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1828, will wait for the garbage collector to delete the pods
Jan 25 10:58:12.998: INFO: Deleting DaemonSet.extensions daemon-set took: 24.349203ms
Jan 25 10:58:13.299: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.09023ms
Jan 25 10:58:23.173: INFO: Number of nodes with available pods: 0
Jan 25 10:58:23.173: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 10:58:23.179: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1828/daemonsets","resourceVersion":"4231005"},"items":null}

Jan 25 10:58:23.181: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1828/pods","resourceVersion":"4231005"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:58:23.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1828" for this suite.

• [SLOW TEST:48.737 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":279,"completed":185,"skipped":3363,"failed":0}
SSSSSSSSSSSSSS
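The trigger-then-rollback sequence maps onto rollout commands. Namespace, DaemonSet name, and the bad image are taken from the log; the container name "app" is a guess:

kubectl -n daemonsets-1828 set image daemonset/daemon-set app=foo:non-existent
kubectl -n daemonsets-1828 rollout undo daemonset/daemon-set     # roll back before healthy pods churn
kubectl -n daemonsets-1828 rollout status daemonset/daemon-set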
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:58:23.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:58:23.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3441" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":279,"completed":186,"skipped":3377,"failed":0}
SSSS
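The Lease API lives in the coordination.k8s.io group; two quick availability checks:

kubectl api-resources --api-group=coordination.k8s.io   # should list "leases"
kubectl get leases -n kube-node-lease                    # per-node heartbeat leases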
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:58:23.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-c7e6058e-f57d-46bc-b59e-b446a96b3d59
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-c7e6058e-f57d-46bc-b59e-b446a96b3d59
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:59:42.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9162" for this suite.

• [SLOW TEST:79.262 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":279,"completed":187,"skipped":3381,"failed":0}
SSSS
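The propagation behaviour can be reproduced by patching a mounted ConfigMap and watching the projected file change; the kubelet resyncs volumes periodically, so allow up to about a minute. Names are hypothetical:

kubectl create configmap cm-demo --from-literal=data-1=value-1
# ...mount cm-demo as a configMap volume in a running pod, then:
kubectl patch configmap cm-demo -p '{"data":{"data-1":"value-2"}}'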
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:59:42.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-aca61051-f48d-4627-8d26-aebd27a8b328
STEP: Creating a pod to test consume secrets
Jan 25 10:59:42.992: INFO: Waiting up to 5m0s for pod "pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53" in namespace "secrets-4535" to be "success or failure"
Jan 25 10:59:43.084: INFO: Pod "pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53": Phase="Pending", Reason="", readiness=false. Elapsed: 91.758118ms
Jan 25 10:59:45.091: INFO: Pod "pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098825324s
Jan 25 10:59:47.100: INFO: Pod "pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107109987s
Jan 25 10:59:49.107: INFO: Pod "pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113929729s
Jan 25 10:59:51.116: INFO: Pod "pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123481717s
Jan 25 10:59:53.124: INFO: Pod "pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53": Phase="Pending", Reason="", readiness=false. Elapsed: 10.131588604s
Jan 25 10:59:55.224: INFO: Pod "pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53": Phase="Pending", Reason="", readiness=false. Elapsed: 12.231610044s
Jan 25 10:59:57.238: INFO: Pod "pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.24564826s
STEP: Saw pod success
Jan 25 10:59:57.239: INFO: Pod "pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53" satisfied condition "success or failure"
Jan 25 10:59:57.246: INFO: Trying to get logs from node jerma-node pod pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53 container secret-env-test: 
STEP: delete the pod
Jan 25 10:59:57.316: INFO: Waiting for pod pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53 to disappear
Jan 25 10:59:57.324: INFO: Pod pod-secrets-2e5ae191-2817-40f7-ae79-b8c1d607ad53 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 10:59:57.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4535" for this suite.

• [SLOW TEST:14.534 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":279,"completed":188,"skipped":3385,"failed":0}
SSSSS
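A minimal sketch of consuming a Secret key as an env var, with hypothetical names:

kubectl create secret generic secret-demo --from-literal=SECRET_DATA=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: SECRET_DATA
EOF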
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 10:59:57.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 10:59:58.762: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 11:00:00.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:00:02.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:00:04.802: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:00:06.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546798, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 11:00:09.902: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 25 11:00:10.030: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:00:10.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6533" for this suite.
STEP: Destroying namespace "webhook-6533-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.081 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":279,"completed":189,"skipped":3390,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:00:10.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 25 11:00:10.627: INFO: Waiting up to 5m0s for pod "pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e" in namespace "emptydir-8998" to be "success or failure"
Jan 25 11:00:10.658: INFO: Pod "pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.852558ms
Jan 25 11:00:12.666: INFO: Pod "pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039340317s
Jan 25 11:00:14.671: INFO: Pod "pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044406086s
Jan 25 11:00:16.683: INFO: Pod "pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056530012s
Jan 25 11:00:18.695: INFO: Pod "pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068597231s
Jan 25 11:00:20.706: INFO: Pod "pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.079664605s
Jan 25 11:00:22.714: INFO: Pod "pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.087276803s
STEP: Saw pod success
Jan 25 11:00:22.714: INFO: Pod "pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e" satisfied condition "success or failure"
Jan 25 11:00:22.719: INFO: Trying to get logs from node jerma-node pod pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e container test-container: 
STEP: delete the pod
Jan 25 11:00:22.793: INFO: Waiting for pod pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e to disappear
Jan 25 11:00:22.799: INFO: Pod pod-2a0a86f8-bcc1-4880-998f-35cc88643f3e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:00:22.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8998" for this suite.

• [SLOW TEST:12.401 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":190,"skipped":3399,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:00:22.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 11:00:22.963: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df" in namespace "projected-808" to be "success or failure"
Jan 25 11:00:23.003: INFO: Pod "downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df": Phase="Pending", Reason="", readiness=false. Elapsed: 39.36609ms
Jan 25 11:00:25.013: INFO: Pod "downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049575586s
Jan 25 11:00:27.025: INFO: Pod "downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062137279s
Jan 25 11:00:29.032: INFO: Pod "downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068696361s
Jan 25 11:00:31.058: INFO: Pod "downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09455972s
Jan 25 11:00:33.069: INFO: Pod "downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105489435s
STEP: Saw pod success
Jan 25 11:00:33.069: INFO: Pod "downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df" satisfied condition "success or failure"
Jan 25 11:00:33.075: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df container client-container: 
STEP: delete the pod
Jan 25 11:00:33.176: INFO: Waiting for pod downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df to disappear
Jan 25 11:00:33.190: INFO: Pod downwardapi-volume-2298db4f-89ef-4cba-95b8-417aa612f3df no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:00:33.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-808" for this suite.

• [SLOW TEST:10.380 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":279,"completed":191,"skipped":3415,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:00:33.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 11:00:33.320: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa0676f7-d3aa-44e3-876c-4db9b6f0fb10" in namespace "downward-api-8129" to be "success or failure"
Jan 25 11:00:33.339: INFO: Pod "downwardapi-volume-fa0676f7-d3aa-44e3-876c-4db9b6f0fb10": Phase="Pending", Reason="", readiness=false. Elapsed: 18.822726ms
Jan 25 11:00:35.348: INFO: Pod "downwardapi-volume-fa0676f7-d3aa-44e3-876c-4db9b6f0fb10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027866487s
Jan 25 11:00:37.356: INFO: Pod "downwardapi-volume-fa0676f7-d3aa-44e3-876c-4db9b6f0fb10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035870092s
Jan 25 11:00:39.364: INFO: Pod "downwardapi-volume-fa0676f7-d3aa-44e3-876c-4db9b6f0fb10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043670674s
Jan 25 11:00:41.374: INFO: Pod "downwardapi-volume-fa0676f7-d3aa-44e3-876c-4db9b6f0fb10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053372837s
STEP: Saw pod success
Jan 25 11:00:41.374: INFO: Pod "downwardapi-volume-fa0676f7-d3aa-44e3-876c-4db9b6f0fb10" satisfied condition "success or failure"
Jan 25 11:00:41.379: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-fa0676f7-d3aa-44e3-876c-4db9b6f0fb10 container client-container: 
STEP: delete the pod
Jan 25 11:00:41.418: INFO: Waiting for pod downwardapi-volume-fa0676f7-d3aa-44e3-876c-4db9b6f0fb10 to disappear
Jan 25 11:00:41.426: INFO: Pod downwardapi-volume-fa0676f7-d3aa-44e3-876c-4db9b6f0fb10 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:00:41.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8129" for this suite.

• [SLOW TEST:8.247 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":279,"completed":192,"skipped":3419,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:00:41.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-3bb1e6b9-bc5e-4d07-b998-5cd8c065dcab
STEP: Creating a pod to test consume configMaps
Jan 25 11:00:41.681: INFO: Waiting up to 5m0s for pod "pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63" in namespace "configmap-7422" to be "success or failure"
Jan 25 11:00:41.736: INFO: Pod "pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63": Phase="Pending", Reason="", readiness=false. Elapsed: 54.261837ms
Jan 25 11:00:43.744: INFO: Pod "pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062385278s
Jan 25 11:00:45.752: INFO: Pod "pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07020735s
Jan 25 11:00:47.760: INFO: Pod "pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078199452s
Jan 25 11:00:49.773: INFO: Pod "pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091971428s
Jan 25 11:00:51.799: INFO: Pod "pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63": Phase="Pending", Reason="", readiness=false. Elapsed: 10.117778129s
Jan 25 11:00:53.812: INFO: Pod "pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.130441436s
STEP: Saw pod success
Jan 25 11:00:53.812: INFO: Pod "pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63" satisfied condition "success or failure"
Jan 25 11:00:53.815: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63 container configmap-volume-test: 
STEP: delete the pod
Jan 25 11:00:53.873: INFO: Waiting for pod pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63 to disappear
Jan 25 11:00:53.890: INFO: Pod pod-configmaps-4425735d-1d68-4ca9-8dcf-2104f5489f63 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:00:53.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7422" for this suite.

• [SLOW TEST:12.507 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":279,"completed":193,"skipped":3431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:00:53.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:01:11.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9016" for this suite.

• [SLOW TEST:17.270 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":279,"completed":194,"skipped":3469,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:01:11.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Jan 25 11:01:11.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8635'
Jan 25 11:01:11.933: INFO: stderr: ""
Jan 25 11:01:11.933: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 11:01:11.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8635'
Jan 25 11:01:12.165: INFO: stderr: ""
Jan 25 11:01:12.165: INFO: stdout: "update-demo-nautilus-sd8cw update-demo-nautilus-svz7d "
Jan 25 11:01:12.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd8cw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:12.282: INFO: stderr: ""
Jan 25 11:01:12.282: INFO: stdout: ""
Jan 25 11:01:12.282: INFO: update-demo-nautilus-sd8cw is created but not running
Jan 25 11:01:17.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8635'
Jan 25 11:01:17.445: INFO: stderr: ""
Jan 25 11:01:17.446: INFO: stdout: "update-demo-nautilus-sd8cw update-demo-nautilus-svz7d "
Jan 25 11:01:17.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd8cw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:17.544: INFO: stderr: ""
Jan 25 11:01:17.544: INFO: stdout: ""
Jan 25 11:01:17.544: INFO: update-demo-nautilus-sd8cw is created but not running
Jan 25 11:01:22.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8635'
Jan 25 11:01:22.667: INFO: stderr: ""
Jan 25 11:01:22.667: INFO: stdout: "update-demo-nautilus-sd8cw update-demo-nautilus-svz7d "
Jan 25 11:01:22.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd8cw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:22.752: INFO: stderr: ""
Jan 25 11:01:22.752: INFO: stdout: ""
Jan 25 11:01:22.752: INFO: update-demo-nautilus-sd8cw is created but not running
Jan 25 11:01:27.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8635'
Jan 25 11:01:27.887: INFO: stderr: ""
Jan 25 11:01:27.888: INFO: stdout: "update-demo-nautilus-sd8cw update-demo-nautilus-svz7d "
Jan 25 11:01:27.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd8cw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:28.088: INFO: stderr: ""
Jan 25 11:01:28.088: INFO: stdout: "true"
Jan 25 11:01:28.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd8cw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:28.196: INFO: stderr: ""
Jan 25 11:01:28.196: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 11:01:28.196: INFO: validating pod update-demo-nautilus-sd8cw
Jan 25 11:01:28.203: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 11:01:28.204: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 11:01:28.204: INFO: update-demo-nautilus-sd8cw is verified up and running
Jan 25 11:01:28.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svz7d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:28.318: INFO: stderr: ""
Jan 25 11:01:28.318: INFO: stdout: "true"
Jan 25 11:01:28.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svz7d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:28.398: INFO: stderr: ""
Jan 25 11:01:28.398: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 11:01:28.398: INFO: validating pod update-demo-nautilus-svz7d
Jan 25 11:01:28.420: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 11:01:28.420: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 11:01:28.420: INFO: update-demo-nautilus-svz7d is verified up and running
STEP: scaling down the replication controller
Jan 25 11:01:28.425: INFO: scanned /root for discovery docs: 
Jan 25 11:01:28.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8635'
Jan 25 11:01:29.659: INFO: stderr: ""
Jan 25 11:01:29.660: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 11:01:29.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8635'
Jan 25 11:01:29.819: INFO: stderr: ""
Jan 25 11:01:29.819: INFO: stdout: "update-demo-nautilus-sd8cw update-demo-nautilus-svz7d "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 25 11:01:34.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8635'
Jan 25 11:01:34.912: INFO: stderr: ""
Jan 25 11:01:34.912: INFO: stdout: "update-demo-nautilus-sd8cw "
Jan 25 11:01:34.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd8cw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:35.040: INFO: stderr: ""
Jan 25 11:01:35.040: INFO: stdout: "true"
Jan 25 11:01:35.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd8cw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:35.132: INFO: stderr: ""
Jan 25 11:01:35.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 11:01:35.132: INFO: validating pod update-demo-nautilus-sd8cw
Jan 25 11:01:35.138: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 11:01:35.139: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 11:01:35.139: INFO: update-demo-nautilus-sd8cw is verified up and running
STEP: scaling up the replication controller
Jan 25 11:01:35.143: INFO: scanned /root for discovery docs: 
Jan 25 11:01:35.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8635'
Jan 25 11:01:36.377: INFO: stderr: ""
Jan 25 11:01:36.378: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 11:01:36.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8635'
Jan 25 11:01:36.593: INFO: stderr: ""
Jan 25 11:01:36.594: INFO: stdout: "update-demo-nautilus-ksbqf update-demo-nautilus-sd8cw "
Jan 25 11:01:36.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksbqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:36.677: INFO: stderr: ""
Jan 25 11:01:36.678: INFO: stdout: ""
Jan 25 11:01:36.678: INFO: update-demo-nautilus-ksbqf is created but not running
Jan 25 11:01:41.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8635'
Jan 25 11:01:41.811: INFO: stderr: ""
Jan 25 11:01:41.811: INFO: stdout: "update-demo-nautilus-ksbqf update-demo-nautilus-sd8cw "
Jan 25 11:01:41.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksbqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:41.957: INFO: stderr: ""
Jan 25 11:01:41.957: INFO: stdout: ""
Jan 25 11:01:41.957: INFO: update-demo-nautilus-ksbqf is created but not running
Jan 25 11:01:46.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8635'
Jan 25 11:01:47.139: INFO: stderr: ""
Jan 25 11:01:47.140: INFO: stdout: "update-demo-nautilus-ksbqf update-demo-nautilus-sd8cw "
Jan 25 11:01:47.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksbqf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:47.238: INFO: stderr: ""
Jan 25 11:01:47.238: INFO: stdout: "true"
Jan 25 11:01:47.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ksbqf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:47.374: INFO: stderr: ""
Jan 25 11:01:47.374: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 11:01:47.374: INFO: validating pod update-demo-nautilus-ksbqf
Jan 25 11:01:47.380: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 11:01:47.380: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 11:01:47.380: INFO: update-demo-nautilus-ksbqf is verified up and running
Jan 25 11:01:47.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd8cw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:47.501: INFO: stderr: ""
Jan 25 11:01:47.501: INFO: stdout: "true"
Jan 25 11:01:47.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sd8cw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8635'
Jan 25 11:01:47.652: INFO: stderr: ""
Jan 25 11:01:47.652: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 11:01:47.653: INFO: validating pod update-demo-nautilus-sd8cw
Jan 25 11:01:47.658: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 11:01:47.658: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 11:01:47.658: INFO: update-demo-nautilus-sd8cw is verified up and running
STEP: using delete to clean up resources
Jan 25 11:01:47.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8635'
Jan 25 11:01:47.765: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 11:01:47.765: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 25 11:01:47.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8635'
Jan 25 11:01:47.892: INFO: stderr: "No resources found in kubectl-8635 namespace.\n"
Jan 25 11:01:47.892: INFO: stdout: ""
Jan 25 11:01:47.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8635 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 25 11:01:48.065: INFO: stderr: ""
Jan 25 11:01:48.065: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:01:48.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8635" for this suite.

• [SLOW TEST:36.892 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":279,"completed":195,"skipped":3501,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:01:48.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:01:59.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6810" for this suite.

• [SLOW TEST:11.275 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":279,"completed":196,"skipped":3506,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:01:59.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:01:59.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4513'
Jan 25 11:01:59.996: INFO: stderr: ""
Jan 25 11:01:59.996: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jan 25 11:01:59.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4513'
Jan 25 11:02:00.355: INFO: stderr: ""
Jan 25 11:02:00.355: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 25 11:02:01.365: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 11:02:01.365: INFO: Found 0 / 1
Jan 25 11:02:02.361: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 11:02:02.362: INFO: Found 0 / 1
Jan 25 11:02:03.365: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 11:02:03.366: INFO: Found 0 / 1
Jan 25 11:02:04.363: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 11:02:04.363: INFO: Found 0 / 1
Jan 25 11:02:05.366: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 11:02:05.367: INFO: Found 0 / 1
Jan 25 11:02:06.366: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 11:02:06.366: INFO: Found 0 / 1
Jan 25 11:02:07.378: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 11:02:07.379: INFO: Found 0 / 1
Jan 25 11:02:08.367: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 11:02:08.367: INFO: Found 1 / 1
Jan 25 11:02:08.367: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 25 11:02:08.373: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 11:02:08.373: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 25 11:02:08.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-7f84c --namespace=kubectl-4513'
Jan 25 11:02:08.594: INFO: stderr: ""
Jan 25 11:02:08.595: INFO: stdout: "Name:         agnhost-master-7f84c\nNamespace:    kubectl-4513\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Sat, 25 Jan 2020 11:02:00 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://9761807a95fd0a8d065a2e3d4ff4cb9096d110c4e6164cd4f94c97f2a807b903\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 25 Jan 2020 11:02:06 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-47fn5 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-47fn5:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-47fn5\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-4513/agnhost-master-7f84c to jerma-node\n  Normal  Pulled     5s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    3s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    2s         kubelet, jerma-node  Started container agnhost-master\n"
Jan 25 11:02:08.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-4513'
Jan 25 11:02:08.752: INFO: stderr: ""
Jan 25 11:02:08.752: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-4513\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: agnhost-master-7f84c\n"
Jan 25 11:02:08.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-4513'
Jan 25 11:02:08.856: INFO: stderr: ""
Jan 25 11:02:08.856: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-4513\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.155.100\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Jan 25 11:02:08.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Jan 25 11:02:09.082: INFO: stderr: ""
Jan 25 11:02:09.082: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Sat, 25 Jan 2020 11:01:59 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 25 Jan 2020 10:58:03 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 25 Jan 2020 10:58:03 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 25 Jan 2020 10:58:03 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 25 Jan 2020 10:58:03 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         20d\n  kubectl-4513                agnhost-master-7f84c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan 25 11:02:09.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4513'
Jan 25 11:02:09.165: INFO: stderr: ""
Jan 25 11:02:09.166: INFO: stdout: "Name:         kubectl-4513\nLabels:       e2e-framework=kubectl\n              e2e-run=322dc050-c61b-43b0-8ec7-4963302458f4\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:02:09.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4513" for this suite.

• [SLOW TEST:9.767 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":279,"completed":197,"skipped":3510,"failed":0}
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:02:09.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 25 11:02:09.336: INFO: PodSpec: initContainers in spec.initContainers
Jan 25 11:03:15.200: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-96824a93-587c-4645-9fc8-1c09e251d30c", GenerateName:"", Namespace:"init-container-2341", SelfLink:"/api/v1/namespaces/init-container-2341/pods/pod-init-96824a93-587c-4645-9fc8-1c09e251d30c", UID:"046dfde0-0cd9-4464-9995-e24d79341ee9", ResourceVersion:"4232101", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715546929, loc:(*time.Location)(0x7e51ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"336693327"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4q5mf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0009321c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4q5mf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4q5mf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4q5mf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003e5a068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021c7a40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003e5a100)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003e5a120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003e5a128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003e5a12c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546929, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546929, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546929, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715546929, loc:(*time.Location)(0x7e51ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.2", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.2"}}, StartTime:(*v1.Time)(0xc0060b2040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00296e070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00296e0e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://25799b54828195d432605b2d3cf05deb8572eab9bd8bb8a5a94a0620d15e9a25", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0060b2080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0060b2060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003e5a1af)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:03:15.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2341" for this suite.

• [SLOW TEST:66.064 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":279,"completed":198,"skipped":3515,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:03:15.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Jan 25 11:03:24.414: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:03:24.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2950" for this suite.

• [SLOW TEST:9.305 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":279,"completed":199,"skipped":3531,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:03:24.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 11:03:24.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929" in namespace "projected-9248" to be "success or failure"
Jan 25 11:03:24.685: INFO: Pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.88614ms
Jan 25 11:03:26.698: INFO: Pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01643095s
Jan 25 11:03:28.711: INFO: Pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029675889s
Jan 25 11:03:30.719: INFO: Pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037282504s
Jan 25 11:03:32.774: INFO: Pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091742738s
Jan 25 11:03:34.788: INFO: Pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929": Phase="Pending", Reason="", readiness=false. Elapsed: 10.106613936s
Jan 25 11:03:36.796: INFO: Pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929": Phase="Pending", Reason="", readiness=false. Elapsed: 12.114296511s
Jan 25 11:03:38.802: INFO: Pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929": Phase="Pending", Reason="", readiness=false. Elapsed: 14.120004691s
Jan 25 11:03:40.811: INFO: Pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.129537508s
STEP: Saw pod success
Jan 25 11:03:40.812: INFO: Pod "downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929" satisfied condition "success or failure"
Jan 25 11:03:40.816: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929 container client-container: 
STEP: delete the pod
Jan 25 11:03:40.926: INFO: Waiting for pod downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929 to disappear
Jan 25 11:03:40.955: INFO: Pod downwardapi-volume-f9eba3fa-1b87-48ad-969b-1067124cf929 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:03:40.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9248" for this suite.

• [SLOW TEST:16.417 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":200,"skipped":3599,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:03:40.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:04:17.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7624" for this suite.

• [SLOW TEST:36.277 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":279,"completed":201,"skipped":3616,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:04:17.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:04:17.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5742" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":279,"completed":202,"skipped":3626,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:04:17.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:04:30.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3736" for this suite.

• [SLOW TEST:12.834 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":279,"completed":203,"skipped":3654,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:04:30.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-afe18179-da5c-445e-b6a6-dabd742e23ee in namespace container-probe-4927
Jan 25 11:04:42.568: INFO: Started pod liveness-afe18179-da5c-445e-b6a6-dabd742e23ee in namespace container-probe-4927
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 11:04:42.574: INFO: Initial restart count of pod liveness-afe18179-da5c-445e-b6a6-dabd742e23ee is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:08:44.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4927" for this suite.

• [SLOW TEST:253.865 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":279,"completed":204,"skipped":3688,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:08:44.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-secret-jrc7
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 11:08:44.400: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jrc7" in namespace "subpath-4344" to be "success or failure"
Jan 25 11:08:44.415: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.710889ms
Jan 25 11:08:46.428: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028107501s
Jan 25 11:08:48.436: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035798911s
Jan 25 11:08:50.446: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046028469s
Jan 25 11:08:52.458: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057761556s
Jan 25 11:08:54.466: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Running", Reason="", readiness=true. Elapsed: 10.065716686s
Jan 25 11:08:56.477: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Running", Reason="", readiness=true. Elapsed: 12.076752648s
Jan 25 11:08:58.489: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Running", Reason="", readiness=true. Elapsed: 14.088865834s
Jan 25 11:09:00.501: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Running", Reason="", readiness=true. Elapsed: 16.101259104s
Jan 25 11:09:02.513: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Running", Reason="", readiness=true. Elapsed: 18.113310592s
Jan 25 11:09:04.525: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Running", Reason="", readiness=true. Elapsed: 20.125524816s
Jan 25 11:09:06.540: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Running", Reason="", readiness=true. Elapsed: 22.140598036s
Jan 25 11:09:08.552: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Running", Reason="", readiness=true. Elapsed: 24.152231183s
Jan 25 11:09:10.613: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Running", Reason="", readiness=true. Elapsed: 26.213074436s
Jan 25 11:09:12.625: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Running", Reason="", readiness=true. Elapsed: 28.225511566s
Jan 25 11:09:14.685: INFO: Pod "pod-subpath-test-secret-jrc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.285583419s
STEP: Saw pod success
Jan 25 11:09:14.686: INFO: Pod "pod-subpath-test-secret-jrc7" satisfied condition "success or failure"
Jan 25 11:09:14.699: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-jrc7 container test-container-subpath-secret-jrc7: 
STEP: delete the pod
Jan 25 11:09:14.744: INFO: Waiting for pod pod-subpath-test-secret-jrc7 to disappear
Jan 25 11:09:14.746: INFO: Pod pod-subpath-test-secret-jrc7 no longer exists
STEP: Deleting pod pod-subpath-test-secret-jrc7
Jan 25 11:09:14.746: INFO: Deleting pod "pod-subpath-test-secret-jrc7" in namespace "subpath-4344"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:09:14.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4344" for this suite.

• [SLOW TEST:30.559 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":279,"completed":205,"skipped":3694,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:09:14.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:09:14.859: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:09:15.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4201" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":279,"completed":206,"skipped":3694,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:09:15.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 25 11:09:30.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 11:09:30.004: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 11:09:32.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 11:09:32.012: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 11:09:34.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 11:09:34.014: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 25 11:09:36.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 25 11:09:36.011: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:09:36.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6783" for this suite.

• [SLOW TEST:20.406 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":279,"completed":207,"skipped":3696,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:09:36.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:09:42.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8473" for this suite.
STEP: Destroying namespace "nsdeletetest-3441" for this suite.
Jan 25 11:09:42.869: INFO: Namespace nsdeletetest-3441 was already deleted
STEP: Destroying namespace "nsdeletetest-6321" for this suite.

• [SLOW TEST:6.850 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":279,"completed":208,"skipped":3708,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:09:42.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 25 11:09:43.281: INFO: Waiting up to 5m0s for pod "pod-5569dcc2-f98f-4cc8-9124-ce1239820c80" in namespace "emptydir-3303" to be "success or failure"
Jan 25 11:09:43.361: INFO: Pod "pod-5569dcc2-f98f-4cc8-9124-ce1239820c80": Phase="Pending", Reason="", readiness=false. Elapsed: 80.133879ms
Jan 25 11:09:45.370: INFO: Pod "pod-5569dcc2-f98f-4cc8-9124-ce1239820c80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088982101s
Jan 25 11:09:47.385: INFO: Pod "pod-5569dcc2-f98f-4cc8-9124-ce1239820c80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103847236s
Jan 25 11:09:49.396: INFO: Pod "pod-5569dcc2-f98f-4cc8-9124-ce1239820c80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114194385s
Jan 25 11:09:51.401: INFO: Pod "pod-5569dcc2-f98f-4cc8-9124-ce1239820c80": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120018344s
Jan 25 11:09:53.463: INFO: Pod "pod-5569dcc2-f98f-4cc8-9124-ce1239820c80": Phase="Pending", Reason="", readiness=false. Elapsed: 10.181569135s
Jan 25 11:09:55.469: INFO: Pod "pod-5569dcc2-f98f-4cc8-9124-ce1239820c80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.187373277s
STEP: Saw pod success
Jan 25 11:09:55.469: INFO: Pod "pod-5569dcc2-f98f-4cc8-9124-ce1239820c80" satisfied condition "success or failure"
Jan 25 11:09:55.473: INFO: Trying to get logs from node jerma-node pod pod-5569dcc2-f98f-4cc8-9124-ce1239820c80 container test-container: 
STEP: delete the pod
Jan 25 11:09:56.400: INFO: Waiting for pod pod-5569dcc2-f98f-4cc8-9124-ce1239820c80 to disappear
Jan 25 11:09:56.405: INFO: Pod pod-5569dcc2-f98f-4cc8-9124-ce1239820c80 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:09:56.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3303" for this suite.

• [SLOW TEST:13.535 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":209,"skipped":3709,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:09:56.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 25 11:09:56.627: INFO: Waiting up to 5m0s for pod "pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7" in namespace "emptydir-7951" to be "success or failure"
Jan 25 11:09:56.642: INFO: Pod "pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.418586ms
Jan 25 11:09:58.663: INFO: Pod "pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035382682s
Jan 25 11:10:00.668: INFO: Pod "pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040413964s
Jan 25 11:10:02.677: INFO: Pod "pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049757746s
Jan 25 11:10:04.688: INFO: Pod "pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060757055s
Jan 25 11:10:06.704: INFO: Pod "pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076814672s
STEP: Saw pod success
Jan 25 11:10:06.704: INFO: Pod "pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7" satisfied condition "success or failure"
Jan 25 11:10:06.711: INFO: Trying to get logs from node jerma-node pod pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7 container test-container: 
STEP: delete the pod
Jan 25 11:10:06.872: INFO: Waiting for pod pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7 to disappear
Jan 25 11:10:06.890: INFO: Pod pod-f34fab54-51bd-45c6-b7df-6bd90c925ec7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:10:06.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7951" for this suite.

• [SLOW TEST:10.493 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":210,"skipped":3720,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:10:06.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-3d1757fc-607f-4a77-976b-15585c0668e6
STEP: Creating a pod to test consume secrets
Jan 25 11:10:07.376: INFO: Waiting up to 5m0s for pod "pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408" in namespace "secrets-5687" to be "success or failure"
Jan 25 11:10:07.390: INFO: Pod "pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408": Phase="Pending", Reason="", readiness=false. Elapsed: 14.317692ms
Jan 25 11:10:09.401: INFO: Pod "pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02466096s
Jan 25 11:10:11.411: INFO: Pod "pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035089819s
Jan 25 11:10:13.425: INFO: Pod "pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049254125s
Jan 25 11:10:15.434: INFO: Pod "pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058154408s
Jan 25 11:10:17.443: INFO: Pod "pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067186016s
Jan 25 11:10:19.449: INFO: Pod "pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.073254928s
STEP: Saw pod success
Jan 25 11:10:19.449: INFO: Pod "pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408" satisfied condition "success or failure"
Jan 25 11:10:19.453: INFO: Trying to get logs from node jerma-node pod pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408 container secret-volume-test: 
STEP: delete the pod
Jan 25 11:10:19.507: INFO: Waiting for pod pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408 to disappear
Jan 25 11:10:19.515: INFO: Pod pod-secrets-9fa1a81b-8212-4091-aa30-9706476ea408 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:10:19.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5687" for this suite.
STEP: Destroying namespace "secret-namespace-7672" for this suite.

• [SLOW TEST:12.627 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":279,"completed":211,"skipped":3721,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:10:19.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:10:19.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4027" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":279,"completed":212,"skipped":3734,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:10:19.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 25 11:10:20.177: INFO: Waiting up to 5m0s for pod "pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2" in namespace "emptydir-8784" to be "success or failure"
Jan 25 11:10:20.272: INFO: Pod "pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2": Phase="Pending", Reason="", readiness=false. Elapsed: 94.803456ms
Jan 25 11:10:22.280: INFO: Pod "pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102949356s
Jan 25 11:10:24.292: INFO: Pod "pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115121412s
Jan 25 11:10:26.302: INFO: Pod "pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12529235s
Jan 25 11:10:28.312: INFO: Pod "pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134506575s
Jan 25 11:10:30.327: INFO: Pod "pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150296775s
STEP: Saw pod success
Jan 25 11:10:30.328: INFO: Pod "pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2" satisfied condition "success or failure"
Jan 25 11:10:30.336: INFO: Trying to get logs from node jerma-node pod pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2 container test-container: 
STEP: delete the pod
Jan 25 11:10:30.918: INFO: Waiting for pod pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2 to disappear
Jan 25 11:10:30.923: INFO: Pod pod-b60195d7-4add-43d1-bf66-1d1a80fcb3d2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:10:30.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8784" for this suite.

• [SLOW TEST:10.978 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":213,"skipped":3735,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:10:30.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with configMap that has name projected-configmap-test-upd-7b8b47ad-982b-4f9f-9559-892b89d9cf90
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-7b8b47ad-982b-4f9f-9559-892b89d9cf90
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:10:41.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2259" for this suite.

• [SLOW TEST:10.373 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":279,"completed":214,"skipped":3739,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:10:41.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-5600a0d3-768c-4f4d-b2f6-556cf1c9555e
STEP: Creating a pod to test consume configMaps
Jan 25 11:10:41.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875" in namespace "configmap-8028" to be "success or failure"
Jan 25 11:10:41.428: INFO: Pod "pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875": Phase="Pending", Reason="", readiness=false. Elapsed: 8.708373ms
Jan 25 11:10:43.439: INFO: Pod "pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020124516s
Jan 25 11:10:45.449: INFO: Pod "pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029466276s
Jan 25 11:10:47.463: INFO: Pod "pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043333432s
Jan 25 11:10:49.470: INFO: Pod "pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051201279s
Jan 25 11:10:51.480: INFO: Pod "pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060821494s
STEP: Saw pod success
Jan 25 11:10:51.481: INFO: Pod "pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875" satisfied condition "success or failure"
Jan 25 11:10:51.485: INFO: Trying to get logs from node jerma-node pod pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875 container configmap-volume-test: 
STEP: delete the pod
Jan 25 11:10:51.538: INFO: Waiting for pod pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875 to disappear
Jan 25 11:10:51.551: INFO: Pod pod-configmaps-beee0be1-368d-40a6-90b7-7d70c90e7875 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:10:51.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8028" for this suite.

• [SLOW TEST:10.256 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":279,"completed":215,"skipped":3744,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:10:51.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:10:51.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9777" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":279,"completed":216,"skipped":3762,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:10:51.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7267.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7267.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7267.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7267.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 11:11:12.213: INFO: DNS probes using dns-test-4d49cd75-4db5-47be-b381-fb2d7ceb75c2 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7267.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7267.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7267.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7267.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 11:11:26.434: INFO: File wheezy_udp@dns-test-service-3.dns-7267.svc.cluster.local from pod  dns-7267/dns-test-1db98f2f-36f3-4b1f-a3a3-4101c5ab9bea contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 11:11:26.442: INFO: File jessie_udp@dns-test-service-3.dns-7267.svc.cluster.local from pod  dns-7267/dns-test-1db98f2f-36f3-4b1f-a3a3-4101c5ab9bea contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 11:11:26.442: INFO: Lookups using dns-7267/dns-test-1db98f2f-36f3-4b1f-a3a3-4101c5ab9bea failed for: [wheezy_udp@dns-test-service-3.dns-7267.svc.cluster.local jessie_udp@dns-test-service-3.dns-7267.svc.cluster.local]

Jan 25 11:11:31.467: INFO: File wheezy_udp@dns-test-service-3.dns-7267.svc.cluster.local from pod  dns-7267/dns-test-1db98f2f-36f3-4b1f-a3a3-4101c5ab9bea contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 11:11:31.472: INFO: File jessie_udp@dns-test-service-3.dns-7267.svc.cluster.local from pod  dns-7267/dns-test-1db98f2f-36f3-4b1f-a3a3-4101c5ab9bea contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 11:11:31.472: INFO: Lookups using dns-7267/dns-test-1db98f2f-36f3-4b1f-a3a3-4101c5ab9bea failed for: [wheezy_udp@dns-test-service-3.dns-7267.svc.cluster.local jessie_udp@dns-test-service-3.dns-7267.svc.cluster.local]

Jan 25 11:11:36.452: INFO: File wheezy_udp@dns-test-service-3.dns-7267.svc.cluster.local from pod  dns-7267/dns-test-1db98f2f-36f3-4b1f-a3a3-4101c5ab9bea contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 11:11:36.458: INFO: File jessie_udp@dns-test-service-3.dns-7267.svc.cluster.local from pod  dns-7267/dns-test-1db98f2f-36f3-4b1f-a3a3-4101c5ab9bea contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 11:11:36.458: INFO: Lookups using dns-7267/dns-test-1db98f2f-36f3-4b1f-a3a3-4101c5ab9bea failed for: [wheezy_udp@dns-test-service-3.dns-7267.svc.cluster.local jessie_udp@dns-test-service-3.dns-7267.svc.cluster.local]

Jan 25 11:11:41.463: INFO: DNS probes using dns-test-1db98f2f-36f3-4b1f-a3a3-4101c5ab9bea succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7267.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7267.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7267.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7267.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 11:11:57.950: INFO: DNS probes using dns-test-5f1ed709-da61-4a0f-9eed-b44db800588e succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:11:58.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7267" for this suite.

• [SLOW TEST:66.247 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":279,"completed":217,"skipped":3765,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:11:58.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 11:11:58.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4203'
Jan 25 11:12:01.888: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 11:12:01.888: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604
Jan 25 11:12:03.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4203'
Jan 25 11:12:04.144: INFO: stderr: ""
Jan 25 11:12:04.144: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:12:04.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4203" for this suite.

• [SLOW TEST:6.023 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1592
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":279,"completed":218,"skipped":3773,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:12:04.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod test-webserver-aedd2213-80ec-4e73-bb27-dd5463b7a705 in namespace container-probe-3436
Jan 25 11:12:16.723: INFO: Started pod test-webserver-aedd2213-80ec-4e73-bb27-dd5463b7a705 in namespace container-probe-3436
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 11:12:16.727: INFO: Initial restart count of pod test-webserver-aedd2213-80ec-4e73-bb27-dd5463b7a705 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:16:18.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3436" for this suite.

• [SLOW TEST:254.136 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":279,"completed":219,"skipped":3780,"failed":0}
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:16:18.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 25 11:16:18.458: INFO: Waiting up to 5m0s for pod "downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5" in namespace "downward-api-4375" to be "success or failure"
Jan 25 11:16:18.470: INFO: Pod "downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.317717ms
Jan 25 11:16:20.481: INFO: Pod "downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022648605s
Jan 25 11:16:22.570: INFO: Pod "downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111750918s
Jan 25 11:16:24.579: INFO: Pod "downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120638278s
Jan 25 11:16:26.636: INFO: Pod "downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177828681s
Jan 25 11:16:28.643: INFO: Pod "downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.185009545s
Jan 25 11:16:30.662: INFO: Pod "downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.203939527s
STEP: Saw pod success
Jan 25 11:16:30.663: INFO: Pod "downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5" satisfied condition "success or failure"
Jan 25 11:16:30.670: INFO: Trying to get logs from node jerma-node pod downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5 container dapi-container: 
STEP: delete the pod
Jan 25 11:16:30.927: INFO: Waiting for pod downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5 to disappear
Jan 25 11:16:30.945: INFO: Pod downward-api-ab69caaf-adbf-400d-ba8f-5bb53d9893b5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:16:30.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4375" for this suite.

• [SLOW TEST:12.638 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":279,"completed":220,"skipped":3780,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:16:30.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:16:31.112: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-33cc8175-ed49-4bc7-8403-085286522edf" in namespace "security-context-test-6395" to be "success or failure"
Jan 25 11:16:31.117: INFO: Pod "alpine-nnp-false-33cc8175-ed49-4bc7-8403-085286522edf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.653558ms
Jan 25 11:16:33.129: INFO: Pod "alpine-nnp-false-33cc8175-ed49-4bc7-8403-085286522edf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016935271s
Jan 25 11:16:35.141: INFO: Pod "alpine-nnp-false-33cc8175-ed49-4bc7-8403-085286522edf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028077865s
Jan 25 11:16:37.149: INFO: Pod "alpine-nnp-false-33cc8175-ed49-4bc7-8403-085286522edf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036412917s
Jan 25 11:16:39.158: INFO: Pod "alpine-nnp-false-33cc8175-ed49-4bc7-8403-085286522edf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045226768s
Jan 25 11:16:41.165: INFO: Pod "alpine-nnp-false-33cc8175-ed49-4bc7-8403-085286522edf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.052039955s
Jan 25 11:16:43.172: INFO: Pod "alpine-nnp-false-33cc8175-ed49-4bc7-8403-085286522edf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.059660793s
Jan 25 11:16:43.172: INFO: Pod "alpine-nnp-false-33cc8175-ed49-4bc7-8403-085286522edf" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:16:43.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6395" for this suite.

• [SLOW TEST:12.304 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":221,"skipped":3787,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:16:43.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 25 11:17:07.653: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8757 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:17:07.653: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:17:07.722987       9 log.go:172] (0xc001634210) (0xc001e5eaa0) Create stream
I0125 11:17:07.723196       9 log.go:172] (0xc001634210) (0xc001e5eaa0) Stream added, broadcasting: 1
I0125 11:17:07.732662       9 log.go:172] (0xc001634210) Reply frame received for 1
I0125 11:17:07.732799       9 log.go:172] (0xc001634210) (0xc002c3c000) Create stream
I0125 11:17:07.732841       9 log.go:172] (0xc001634210) (0xc002c3c000) Stream added, broadcasting: 3
I0125 11:17:07.735987       9 log.go:172] (0xc001634210) Reply frame received for 3
I0125 11:17:07.736041       9 log.go:172] (0xc001634210) (0xc001e5eb40) Create stream
I0125 11:17:07.736072       9 log.go:172] (0xc001634210) (0xc001e5eb40) Stream added, broadcasting: 5
I0125 11:17:07.740042       9 log.go:172] (0xc001634210) Reply frame received for 5
I0125 11:17:07.856790       9 log.go:172] (0xc001634210) Data frame received for 3
I0125 11:17:07.857000       9 log.go:172] (0xc002c3c000) (3) Data frame handling
I0125 11:17:07.857042       9 log.go:172] (0xc002c3c000) (3) Data frame sent
I0125 11:17:07.946216       9 log.go:172] (0xc001634210) Data frame received for 1
I0125 11:17:07.946353       9 log.go:172] (0xc001634210) (0xc002c3c000) Stream removed, broadcasting: 3
I0125 11:17:07.946485       9 log.go:172] (0xc001e5eaa0) (1) Data frame handling
I0125 11:17:07.946537       9 log.go:172] (0xc001e5eaa0) (1) Data frame sent
I0125 11:17:07.946626       9 log.go:172] (0xc001634210) (0xc001e5eb40) Stream removed, broadcasting: 5
I0125 11:17:07.946755       9 log.go:172] (0xc001634210) (0xc001e5eaa0) Stream removed, broadcasting: 1
I0125 11:17:07.946827       9 log.go:172] (0xc001634210) Go away received
I0125 11:17:07.947096       9 log.go:172] (0xc001634210) (0xc001e5eaa0) Stream removed, broadcasting: 1
I0125 11:17:07.947126       9 log.go:172] (0xc001634210) (0xc002c3c000) Stream removed, broadcasting: 3
I0125 11:17:07.947146       9 log.go:172] (0xc001634210) (0xc001e5eb40) Stream removed, broadcasting: 5
Jan 25 11:17:07.947: INFO: Exec stderr: ""
Jan 25 11:17:07.947: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8757 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:17:07.947: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:17:07.991086       9 log.go:172] (0xc001d3ef20) (0xc0015c4500) Create stream
I0125 11:17:07.991195       9 log.go:172] (0xc001d3ef20) (0xc0015c4500) Stream added, broadcasting: 1
I0125 11:17:07.996676       9 log.go:172] (0xc001d3ef20) Reply frame received for 1
I0125 11:17:07.996742       9 log.go:172] (0xc001d3ef20) (0xc0016c6000) Create stream
I0125 11:17:07.996757       9 log.go:172] (0xc001d3ef20) (0xc0016c6000) Stream added, broadcasting: 3
I0125 11:17:07.999902       9 log.go:172] (0xc001d3ef20) Reply frame received for 3
I0125 11:17:07.999944       9 log.go:172] (0xc001d3ef20) (0xc001ece0a0) Create stream
I0125 11:17:07.999958       9 log.go:172] (0xc001d3ef20) (0xc001ece0a0) Stream added, broadcasting: 5
I0125 11:17:08.001524       9 log.go:172] (0xc001d3ef20) Reply frame received for 5
I0125 11:17:08.070678       9 log.go:172] (0xc001d3ef20) Data frame received for 3
I0125 11:17:08.070848       9 log.go:172] (0xc0016c6000) (3) Data frame handling
I0125 11:17:08.070904       9 log.go:172] (0xc0016c6000) (3) Data frame sent
I0125 11:17:08.148962       9 log.go:172] (0xc001d3ef20) (0xc0016c6000) Stream removed, broadcasting: 3
I0125 11:17:08.149118       9 log.go:172] (0xc001d3ef20) Data frame received for 1
I0125 11:17:08.149180       9 log.go:172] (0xc001d3ef20) (0xc001ece0a0) Stream removed, broadcasting: 5
I0125 11:17:08.149254       9 log.go:172] (0xc0015c4500) (1) Data frame handling
I0125 11:17:08.149272       9 log.go:172] (0xc0015c4500) (1) Data frame sent
I0125 11:17:08.149285       9 log.go:172] (0xc001d3ef20) (0xc0015c4500) Stream removed, broadcasting: 1
I0125 11:17:08.149304       9 log.go:172] (0xc001d3ef20) Go away received
I0125 11:17:08.150390       9 log.go:172] (0xc001d3ef20) (0xc0015c4500) Stream removed, broadcasting: 1
I0125 11:17:08.150480       9 log.go:172] (0xc001d3ef20) (0xc0016c6000) Stream removed, broadcasting: 3
I0125 11:17:08.150489       9 log.go:172] (0xc001d3ef20) (0xc001ece0a0) Stream removed, broadcasting: 5
Jan 25 11:17:08.150: INFO: Exec stderr: ""
Jan 25 11:17:08.150: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8757 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:17:08.150: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:17:08.187934       9 log.go:172] (0xc001690420) (0xc001ece460) Create stream
I0125 11:17:08.188019       9 log.go:172] (0xc001690420) (0xc001ece460) Stream added, broadcasting: 1
I0125 11:17:08.202424       9 log.go:172] (0xc001690420) Reply frame received for 1
I0125 11:17:08.202528       9 log.go:172] (0xc001690420) (0xc0015c4780) Create stream
I0125 11:17:08.202583       9 log.go:172] (0xc001690420) (0xc0015c4780) Stream added, broadcasting: 3
I0125 11:17:08.206446       9 log.go:172] (0xc001690420) Reply frame received for 3
I0125 11:17:08.206522       9 log.go:172] (0xc001690420) (0xc002c3c1e0) Create stream
I0125 11:17:08.206539       9 log.go:172] (0xc001690420) (0xc002c3c1e0) Stream added, broadcasting: 5
I0125 11:17:08.209415       9 log.go:172] (0xc001690420) Reply frame received for 5
I0125 11:17:08.273668       9 log.go:172] (0xc001690420) Data frame received for 3
I0125 11:17:08.273757       9 log.go:172] (0xc0015c4780) (3) Data frame handling
I0125 11:17:08.273806       9 log.go:172] (0xc0015c4780) (3) Data frame sent
I0125 11:17:08.358919       9 log.go:172] (0xc001690420) (0xc0015c4780) Stream removed, broadcasting: 3
I0125 11:17:08.359051       9 log.go:172] (0xc001690420) Data frame received for 1
I0125 11:17:08.359100       9 log.go:172] (0xc001ece460) (1) Data frame handling
I0125 11:17:08.359146       9 log.go:172] (0xc001690420) (0xc002c3c1e0) Stream removed, broadcasting: 5
I0125 11:17:08.359205       9 log.go:172] (0xc001ece460) (1) Data frame sent
I0125 11:17:08.359230       9 log.go:172] (0xc001690420) (0xc001ece460) Stream removed, broadcasting: 1
I0125 11:17:08.359261       9 log.go:172] (0xc001690420) Go away received
I0125 11:17:08.359748       9 log.go:172] (0xc001690420) (0xc001ece460) Stream removed, broadcasting: 1
I0125 11:17:08.360010       9 log.go:172] (0xc001690420) (0xc0015c4780) Stream removed, broadcasting: 3
I0125 11:17:08.360040       9 log.go:172] (0xc001690420) (0xc002c3c1e0) Stream removed, broadcasting: 5
Jan 25 11:17:08.360: INFO: Exec stderr: ""
Jan 25 11:17:08.360: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8757 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:17:08.360: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:17:08.399603       9 log.go:172] (0xc001634840) (0xc001e5edc0) Create stream
I0125 11:17:08.399674       9 log.go:172] (0xc001634840) (0xc001e5edc0) Stream added, broadcasting: 1
I0125 11:17:08.402709       9 log.go:172] (0xc001634840) Reply frame received for 1
I0125 11:17:08.402771       9 log.go:172] (0xc001634840) (0xc0016c6500) Create stream
I0125 11:17:08.402785       9 log.go:172] (0xc001634840) (0xc0016c6500) Stream added, broadcasting: 3
I0125 11:17:08.404230       9 log.go:172] (0xc001634840) Reply frame received for 3
I0125 11:17:08.404257       9 log.go:172] (0xc001634840) (0xc001ece820) Create stream
I0125 11:17:08.404267       9 log.go:172] (0xc001634840) (0xc001ece820) Stream added, broadcasting: 5
I0125 11:17:08.405844       9 log.go:172] (0xc001634840) Reply frame received for 5
I0125 11:17:08.462113       9 log.go:172] (0xc001634840) Data frame received for 3
I0125 11:17:08.462296       9 log.go:172] (0xc0016c6500) (3) Data frame handling
I0125 11:17:08.462331       9 log.go:172] (0xc0016c6500) (3) Data frame sent
I0125 11:17:08.549183       9 log.go:172] (0xc001634840) (0xc0016c6500) Stream removed, broadcasting: 3
I0125 11:17:08.549629       9 log.go:172] (0xc001634840) Data frame received for 1
I0125 11:17:08.549736       9 log.go:172] (0xc001e5edc0) (1) Data frame handling
I0125 11:17:08.549810       9 log.go:172] (0xc001e5edc0) (1) Data frame sent
I0125 11:17:08.549997       9 log.go:172] (0xc001634840) (0xc001e5edc0) Stream removed, broadcasting: 1
I0125 11:17:08.550472       9 log.go:172] (0xc001634840) (0xc001ece820) Stream removed, broadcasting: 5
I0125 11:17:08.550846       9 log.go:172] (0xc001634840) Go away received
I0125 11:17:08.551197       9 log.go:172] (0xc001634840) (0xc001e5edc0) Stream removed, broadcasting: 1
I0125 11:17:08.551258       9 log.go:172] (0xc001634840) (0xc0016c6500) Stream removed, broadcasting: 3
I0125 11:17:08.551311       9 log.go:172] (0xc001634840) (0xc001ece820) Stream removed, broadcasting: 5
Jan 25 11:17:08.551: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 25 11:17:08.552: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8757 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:17:08.552: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:17:08.599842       9 log.go:172] (0xc000e9a2c0) (0xc000273b80) Create stream
I0125 11:17:08.599986       9 log.go:172] (0xc000e9a2c0) (0xc000273b80) Stream added, broadcasting: 1
I0125 11:17:08.604087       9 log.go:172] (0xc000e9a2c0) Reply frame received for 1
I0125 11:17:08.604146       9 log.go:172] (0xc000e9a2c0) (0xc0022b0000) Create stream
I0125 11:17:08.604158       9 log.go:172] (0xc000e9a2c0) (0xc0022b0000) Stream added, broadcasting: 3
I0125 11:17:08.606934       9 log.go:172] (0xc000e9a2c0) Reply frame received for 3
I0125 11:17:08.607220       9 log.go:172] (0xc000e9a2c0) (0xc001e5ef00) Create stream
I0125 11:17:08.607254       9 log.go:172] (0xc000e9a2c0) (0xc001e5ef00) Stream added, broadcasting: 5
I0125 11:17:08.609834       9 log.go:172] (0xc000e9a2c0) Reply frame received for 5
I0125 11:17:08.666278       9 log.go:172] (0xc000e9a2c0) Data frame received for 3
I0125 11:17:08.666326       9 log.go:172] (0xc0022b0000) (3) Data frame handling
I0125 11:17:08.666353       9 log.go:172] (0xc0022b0000) (3) Data frame sent
I0125 11:17:08.732709       9 log.go:172] (0xc000e9a2c0) (0xc0022b0000) Stream removed, broadcasting: 3
I0125 11:17:08.732952       9 log.go:172] (0xc000e9a2c0) Data frame received for 1
I0125 11:17:08.732976       9 log.go:172] (0xc000273b80) (1) Data frame handling
I0125 11:17:08.733011       9 log.go:172] (0xc000273b80) (1) Data frame sent
I0125 11:17:08.733024       9 log.go:172] (0xc000e9a2c0) (0xc000273b80) Stream removed, broadcasting: 1
I0125 11:17:08.733446       9 log.go:172] (0xc000e9a2c0) (0xc001e5ef00) Stream removed, broadcasting: 5
I0125 11:17:08.733548       9 log.go:172] (0xc000e9a2c0) (0xc000273b80) Stream removed, broadcasting: 1
I0125 11:17:08.733561       9 log.go:172] (0xc000e9a2c0) (0xc0022b0000) Stream removed, broadcasting: 3
I0125 11:17:08.733575       9 log.go:172] (0xc000e9a2c0) (0xc001e5ef00) Stream removed, broadcasting: 5
Jan 25 11:17:08.734: INFO: Exec stderr: ""
Jan 25 11:17:08.734: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8757 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:17:08.734: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:17:08.734641       9 log.go:172] (0xc000e9a2c0) Go away received
I0125 11:17:08.797704       9 log.go:172] (0xc001634dc0) (0xc001e5f680) Create stream
I0125 11:17:08.798117       9 log.go:172] (0xc001634dc0) (0xc001e5f680) Stream added, broadcasting: 1
I0125 11:17:08.805581       9 log.go:172] (0xc001634dc0) Reply frame received for 1
I0125 11:17:08.805930       9 log.go:172] (0xc001634dc0) (0xc002c3c460) Create stream
I0125 11:17:08.806008       9 log.go:172] (0xc001634dc0) (0xc002c3c460) Stream added, broadcasting: 3
I0125 11:17:08.808609       9 log.go:172] (0xc001634dc0) Reply frame received for 3
I0125 11:17:08.808670       9 log.go:172] (0xc001634dc0) (0xc0015c4820) Create stream
I0125 11:17:08.808695       9 log.go:172] (0xc001634dc0) (0xc0015c4820) Stream added, broadcasting: 5
I0125 11:17:08.810375       9 log.go:172] (0xc001634dc0) Reply frame received for 5
I0125 11:17:08.892081       9 log.go:172] (0xc001634dc0) Data frame received for 3
I0125 11:17:08.892330       9 log.go:172] (0xc002c3c460) (3) Data frame handling
I0125 11:17:08.892399       9 log.go:172] (0xc002c3c460) (3) Data frame sent
I0125 11:17:08.983680       9 log.go:172] (0xc001634dc0) (0xc002c3c460) Stream removed, broadcasting: 3
I0125 11:17:08.984045       9 log.go:172] (0xc001634dc0) Data frame received for 1
I0125 11:17:08.984265       9 log.go:172] (0xc001634dc0) (0xc0015c4820) Stream removed, broadcasting: 5
I0125 11:17:08.984477       9 log.go:172] (0xc001e5f680) (1) Data frame handling
I0125 11:17:08.984520       9 log.go:172] (0xc001e5f680) (1) Data frame sent
I0125 11:17:08.984547       9 log.go:172] (0xc001634dc0) (0xc001e5f680) Stream removed, broadcasting: 1
I0125 11:17:08.984587       9 log.go:172] (0xc001634dc0) Go away received
I0125 11:17:08.985232       9 log.go:172] (0xc001634dc0) (0xc001e5f680) Stream removed, broadcasting: 1
I0125 11:17:08.985266       9 log.go:172] (0xc001634dc0) (0xc002c3c460) Stream removed, broadcasting: 3
I0125 11:17:08.985284       9 log.go:172] (0xc001634dc0) (0xc0015c4820) Stream removed, broadcasting: 5
Jan 25 11:17:08.985: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 25 11:17:08.985: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8757 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:17:08.986: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:17:09.020920       9 log.go:172] (0xc001d3f760) (0xc0015c4b40) Create stream
I0125 11:17:09.021048       9 log.go:172] (0xc001d3f760) (0xc0015c4b40) Stream added, broadcasting: 1
I0125 11:17:09.024909       9 log.go:172] (0xc001d3f760) Reply frame received for 1
I0125 11:17:09.024987       9 log.go:172] (0xc001d3f760) (0xc0022b01e0) Create stream
I0125 11:17:09.024996       9 log.go:172] (0xc001d3f760) (0xc0022b01e0) Stream added, broadcasting: 3
I0125 11:17:09.026291       9 log.go:172] (0xc001d3f760) Reply frame received for 3
I0125 11:17:09.026343       9 log.go:172] (0xc001d3f760) (0xc0022b0320) Create stream
I0125 11:17:09.026367       9 log.go:172] (0xc001d3f760) (0xc0022b0320) Stream added, broadcasting: 5
I0125 11:17:09.028267       9 log.go:172] (0xc001d3f760) Reply frame received for 5
I0125 11:17:09.089472       9 log.go:172] (0xc001d3f760) Data frame received for 3
I0125 11:17:09.089891       9 log.go:172] (0xc0022b01e0) (3) Data frame handling
I0125 11:17:09.090082       9 log.go:172] (0xc0022b01e0) (3) Data frame sent
I0125 11:17:09.183995       9 log.go:172] (0xc001d3f760) (0xc0022b01e0) Stream removed, broadcasting: 3
I0125 11:17:09.184324       9 log.go:172] (0xc001d3f760) Data frame received for 1
I0125 11:17:09.184461       9 log.go:172] (0xc001d3f760) (0xc0022b0320) Stream removed, broadcasting: 5
I0125 11:17:09.184612       9 log.go:172] (0xc0015c4b40) (1) Data frame handling
I0125 11:17:09.184672       9 log.go:172] (0xc0015c4b40) (1) Data frame sent
I0125 11:17:09.184689       9 log.go:172] (0xc001d3f760) (0xc0015c4b40) Stream removed, broadcasting: 1
I0125 11:17:09.184760       9 log.go:172] (0xc001d3f760) Go away received
I0125 11:17:09.186413       9 log.go:172] (0xc001d3f760) (0xc0015c4b40) Stream removed, broadcasting: 1
I0125 11:17:09.186704       9 log.go:172] (0xc001d3f760) (0xc0022b01e0) Stream removed, broadcasting: 3
I0125 11:17:09.186719       9 log.go:172] (0xc001d3f760) (0xc0022b0320) Stream removed, broadcasting: 5
Jan 25 11:17:09.186: INFO: Exec stderr: ""
Jan 25 11:17:09.186: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8757 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:17:09.187: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:17:09.241963       9 log.go:172] (0xc0012b4370) (0xc002c3cb40) Create stream
I0125 11:17:09.242537       9 log.go:172] (0xc0012b4370) (0xc002c3cb40) Stream added, broadcasting: 1
I0125 11:17:09.252857       9 log.go:172] (0xc0012b4370) Reply frame received for 1
I0125 11:17:09.252953       9 log.go:172] (0xc0012b4370) (0xc0015c4f00) Create stream
I0125 11:17:09.252975       9 log.go:172] (0xc0012b4370) (0xc0015c4f00) Stream added, broadcasting: 3
I0125 11:17:09.254527       9 log.go:172] (0xc0012b4370) Reply frame received for 3
I0125 11:17:09.254569       9 log.go:172] (0xc0012b4370) (0xc001ece8c0) Create stream
I0125 11:17:09.254589       9 log.go:172] (0xc0012b4370) (0xc001ece8c0) Stream added, broadcasting: 5
I0125 11:17:09.256565       9 log.go:172] (0xc0012b4370) Reply frame received for 5
I0125 11:17:09.333205       9 log.go:172] (0xc0012b4370) Data frame received for 3
I0125 11:17:09.333366       9 log.go:172] (0xc0015c4f00) (3) Data frame handling
I0125 11:17:09.333407       9 log.go:172] (0xc0015c4f00) (3) Data frame sent
I0125 11:17:09.407685       9 log.go:172] (0xc0012b4370) (0xc0015c4f00) Stream removed, broadcasting: 3
I0125 11:17:09.407958       9 log.go:172] (0xc0012b4370) Data frame received for 1
I0125 11:17:09.408224       9 log.go:172] (0xc0012b4370) (0xc001ece8c0) Stream removed, broadcasting: 5
I0125 11:17:09.408353       9 log.go:172] (0xc002c3cb40) (1) Data frame handling
I0125 11:17:09.408396       9 log.go:172] (0xc002c3cb40) (1) Data frame sent
I0125 11:17:09.408424       9 log.go:172] (0xc0012b4370) (0xc002c3cb40) Stream removed, broadcasting: 1
I0125 11:17:09.408481       9 log.go:172] (0xc0012b4370) Go away received
I0125 11:17:09.408843       9 log.go:172] (0xc0012b4370) (0xc002c3cb40) Stream removed, broadcasting: 1
I0125 11:17:09.408884       9 log.go:172] (0xc0012b4370) (0xc0015c4f00) Stream removed, broadcasting: 3
I0125 11:17:09.408906       9 log.go:172] (0xc0012b4370) (0xc001ece8c0) Stream removed, broadcasting: 5
Jan 25 11:17:09.408: INFO: Exec stderr: ""
Jan 25 11:17:09.409: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8757 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:17:09.409: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:17:09.444120       9 log.go:172] (0xc000e9a9a0) (0xc0022b0b40) Create stream
I0125 11:17:09.444204       9 log.go:172] (0xc000e9a9a0) (0xc0022b0b40) Stream added, broadcasting: 1
I0125 11:17:09.457489       9 log.go:172] (0xc000e9a9a0) Reply frame received for 1
I0125 11:17:09.457632       9 log.go:172] (0xc000e9a9a0) (0xc002c3cbe0) Create stream
I0125 11:17:09.457656       9 log.go:172] (0xc000e9a9a0) (0xc002c3cbe0) Stream added, broadcasting: 3
I0125 11:17:09.461280       9 log.go:172] (0xc000e9a9a0) Reply frame received for 3
I0125 11:17:09.461468       9 log.go:172] (0xc000e9a9a0) (0xc001ece960) Create stream
I0125 11:17:09.461493       9 log.go:172] (0xc000e9a9a0) (0xc001ece960) Stream added, broadcasting: 5
I0125 11:17:09.464001       9 log.go:172] (0xc000e9a9a0) Reply frame received for 5
I0125 11:17:09.532172       9 log.go:172] (0xc000e9a9a0) Data frame received for 3
I0125 11:17:09.532357       9 log.go:172] (0xc002c3cbe0) (3) Data frame handling
I0125 11:17:09.532392       9 log.go:172] (0xc002c3cbe0) (3) Data frame sent
I0125 11:17:09.615489       9 log.go:172] (0xc000e9a9a0) (0xc002c3cbe0) Stream removed, broadcasting: 3
I0125 11:17:09.615841       9 log.go:172] (0xc000e9a9a0) Data frame received for 1
I0125 11:17:09.616126       9 log.go:172] (0xc000e9a9a0) (0xc001ece960) Stream removed, broadcasting: 5
I0125 11:17:09.616178       9 log.go:172] (0xc0022b0b40) (1) Data frame handling
I0125 11:17:09.616191       9 log.go:172] (0xc0022b0b40) (1) Data frame sent
I0125 11:17:09.616200       9 log.go:172] (0xc000e9a9a0) (0xc0022b0b40) Stream removed, broadcasting: 1
I0125 11:17:09.616222       9 log.go:172] (0xc000e9a9a0) Go away received
I0125 11:17:09.616737       9 log.go:172] (0xc000e9a9a0) (0xc0022b0b40) Stream removed, broadcasting: 1
I0125 11:17:09.616981       9 log.go:172] (0xc000e9a9a0) (0xc002c3cbe0) Stream removed, broadcasting: 3
I0125 11:17:09.617014       9 log.go:172] (0xc000e9a9a0) (0xc001ece960) Stream removed, broadcasting: 5
Jan 25 11:17:09.617: INFO: Exec stderr: ""
Jan 25 11:17:09.617: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8757 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:17:09.617: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:17:09.679204       9 log.go:172] (0xc001635550) (0xc001e5f9a0) Create stream
I0125 11:17:09.679380       9 log.go:172] (0xc001635550) (0xc001e5f9a0) Stream added, broadcasting: 1
I0125 11:17:09.685086       9 log.go:172] (0xc001635550) Reply frame received for 1
I0125 11:17:09.685131       9 log.go:172] (0xc001635550) (0xc002c3cc80) Create stream
I0125 11:17:09.685143       9 log.go:172] (0xc001635550) (0xc002c3cc80) Stream added, broadcasting: 3
I0125 11:17:09.686175       9 log.go:172] (0xc001635550) Reply frame received for 3
I0125 11:17:09.686201       9 log.go:172] (0xc001635550) (0xc0015c4fa0) Create stream
I0125 11:17:09.686212       9 log.go:172] (0xc001635550) (0xc0015c4fa0) Stream added, broadcasting: 5
I0125 11:17:09.687776       9 log.go:172] (0xc001635550) Reply frame received for 5
I0125 11:17:09.748304       9 log.go:172] (0xc001635550) Data frame received for 3
I0125 11:17:09.748643       9 log.go:172] (0xc002c3cc80) (3) Data frame handling
I0125 11:17:09.748705       9 log.go:172] (0xc002c3cc80) (3) Data frame sent
I0125 11:17:09.844635       9 log.go:172] (0xc001635550) Data frame received for 1
I0125 11:17:09.844880       9 log.go:172] (0xc001635550) (0xc002c3cc80) Stream removed, broadcasting: 3
I0125 11:17:09.844958       9 log.go:172] (0xc001e5f9a0) (1) Data frame handling
I0125 11:17:09.845038       9 log.go:172] (0xc001e5f9a0) (1) Data frame sent
I0125 11:17:09.845328       9 log.go:172] (0xc001635550) (0xc0015c4fa0) Stream removed, broadcasting: 5
I0125 11:17:09.845422       9 log.go:172] (0xc001635550) (0xc001e5f9a0) Stream removed, broadcasting: 1
I0125 11:17:09.845464       9 log.go:172] (0xc001635550) Go away received
I0125 11:17:09.846418       9 log.go:172] (0xc001635550) (0xc001e5f9a0) Stream removed, broadcasting: 1
I0125 11:17:09.846467       9 log.go:172] (0xc001635550) (0xc002c3cc80) Stream removed, broadcasting: 3
I0125 11:17:09.846483       9 log.go:172] (0xc001635550) (0xc0015c4fa0) Stream removed, broadcasting: 5
Jan 25 11:17:09.846: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:17:09.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8757" for this suite.

• [SLOW TEST:26.642 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":222,"skipped":3800,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:17:09.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9734
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating statefulset ss in namespace statefulset-9734
Jan 25 11:17:09.998: INFO: Found 0 stateful pods, waiting for 1
Jan 25 11:17:20.114: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan 25 11:17:30.005: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 25 11:17:30.040: INFO: Deleting all statefulset in ns statefulset-9734
Jan 25 11:17:30.062: INFO: Scaling statefulset ss to 0
Jan 25 11:18:00.243: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 11:18:00.248: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:18:00.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9734" for this suite.

• [SLOW TEST:50.395 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":279,"completed":223,"skipped":3835,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:18:00.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:18:16.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-641" for this suite.

• [SLOW TEST:16.229 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":279,"completed":224,"skipped":3844,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:18:16.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Jan 25 11:18:16.785: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
Jan 25 11:18:17.696: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 25 11:18:20.099: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:18:22.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:18:24.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:18:26.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:18:28.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:18:30.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:18:32.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715547897, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:18:35.065: INFO: Waited 943.231293ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:18:35.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8411" for this suite.

• [SLOW TEST:19.140 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":279,"completed":225,"skipped":3879,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:18:35.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-74b552ff-5860-473b-bd6b-3e250a6bdc95 in namespace container-probe-9790
Jan 25 11:18:50.008: INFO: Started pod busybox-74b552ff-5860-473b-bd6b-3e250a6bdc95 in namespace container-probe-9790
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 11:18:50.012: INFO: Initial restart count of pod busybox-74b552ff-5860-473b-bd6b-3e250a6bdc95 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:22:50.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9790" for this suite.

• [SLOW TEST:254.575 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":279,"completed":226,"skipped":3880,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:22:50.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-2lxb
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 11:22:50.503: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2lxb" in namespace "subpath-5263" to be "success or failure"
Jan 25 11:22:51.124: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Pending", Reason="", readiness=false. Elapsed: 620.142953ms
Jan 25 11:22:53.133: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.629460079s
Jan 25 11:22:55.140: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.636327764s
Jan 25 11:22:57.149: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.64501658s
Jan 25 11:22:59.163: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.659167447s
Jan 25 11:23:01.175: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.67103541s
Jan 25 11:23:03.183: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Running", Reason="", readiness=true. Elapsed: 12.679086281s
Jan 25 11:23:05.203: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Running", Reason="", readiness=true. Elapsed: 14.69896871s
Jan 25 11:23:07.214: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Running", Reason="", readiness=true. Elapsed: 16.710219002s
Jan 25 11:23:09.221: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Running", Reason="", readiness=true. Elapsed: 18.717438218s
Jan 25 11:23:11.230: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Running", Reason="", readiness=true. Elapsed: 20.726734847s
Jan 25 11:23:13.267: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Running", Reason="", readiness=true. Elapsed: 22.763611508s
Jan 25 11:23:15.274: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Running", Reason="", readiness=true. Elapsed: 24.77074108s
Jan 25 11:23:17.288: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Running", Reason="", readiness=true. Elapsed: 26.784086538s
Jan 25 11:23:19.298: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Running", Reason="", readiness=true. Elapsed: 28.794466052s
Jan 25 11:23:21.310: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Running", Reason="", readiness=true. Elapsed: 30.806268678s
Jan 25 11:23:23.316: INFO: Pod "pod-subpath-test-configmap-2lxb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.81213287s
STEP: Saw pod success
Jan 25 11:23:23.316: INFO: Pod "pod-subpath-test-configmap-2lxb" satisfied condition "success or failure"
Jan 25 11:23:23.319: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-2lxb container test-container-subpath-configmap-2lxb: 
STEP: delete the pod
Jan 25 11:23:23.368: INFO: Waiting for pod pod-subpath-test-configmap-2lxb to disappear
Jan 25 11:23:23.435: INFO: Pod pod-subpath-test-configmap-2lxb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2lxb
Jan 25 11:23:23.435: INFO: Deleting pod "pod-subpath-test-configmap-2lxb" in namespace "subpath-5263"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:23:23.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5263" for this suite.

• [SLOW TEST:33.197 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":279,"completed":227,"skipped":3887,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:23:23.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 11:23:24.262: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 11:23:26.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:23:28.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:23:30.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:23:32.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:23:34.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:23:36.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 11:23:39.469: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:23:39.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4751-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:23:40.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-91" for this suite.
STEP: Destroying namespace "webhook-91-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:17.670 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":279,"completed":228,"skipped":3896,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:23:41.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-309.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-309.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-309.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-309.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-309.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-309.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-309.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 11:23:55.615: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:23:55.620: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:23:55.624: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:23:55.627: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:23:55.640: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:23:55.643: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:23:55.646: INFO: Unable to read jessie_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:23:55.648: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:23:55.654: INFO: Lookups using dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local]

Jan 25 11:24:00.676: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:00.688: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:00.697: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:00.704: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:00.723: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:00.727: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:00.731: INFO: Unable to read jessie_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:00.735: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:00.760: INFO: Lookups using dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local]

Jan 25 11:24:05.661: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:05.664: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:05.669: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:05.673: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:05.683: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:05.686: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:05.689: INFO: Unable to read jessie_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:05.693: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:05.700: INFO: Lookups using dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local]

Jan 25 11:24:10.668: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:10.674: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:10.679: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:10.684: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:10.697: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:10.701: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:10.704: INFO: Unable to read jessie_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:10.708: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:10.715: INFO: Lookups using dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local]

Jan 25 11:24:15.672: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:15.684: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:15.693: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:15.701: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:15.731: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:15.739: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:15.744: INFO: Unable to read jessie_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:15.749: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:15.758: INFO: Lookups using dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local]

Jan 25 11:24:20.663: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:20.668: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:20.673: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:20.678: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:20.693: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:20.699: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:20.703: INFO: Unable to read jessie_udp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:20.709: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local from pod dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187: the server could not find the requested resource (get pods dns-test-9673b260-31d9-4436-b001-e68cda183187)
Jan 25 11:24:20.720: INFO: Lookups using dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local wheezy_udp@dns-test-service-2.dns-309.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-309.svc.cluster.local jessie_udp@dns-test-service-2.dns-309.svc.cluster.local jessie_tcp@dns-test-service-2.dns-309.svc.cluster.local]

Jan 25 11:24:25.735: INFO: DNS probes using dns-309/dns-test-9673b260-31d9-4436-b001-e68cda183187 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:24:26.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-309" for this suite.

• [SLOW TEST:45.029 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":279,"completed":229,"skipped":3903,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:24:26.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 11:24:26.858: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 11:24:28.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548267, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:24:30.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548267, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:24:32.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548267, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:24:34.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548267, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:24:36.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548267, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:24:38.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548267, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548266, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 11:24:41.968: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:24:42.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8805" for this suite.
STEP: Destroying namespace "webhook-8805-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:16.499 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":279,"completed":230,"skipped":3925,"failed":0}
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:24:42.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-1026
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 11:24:42.825: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 25 11:24:43.051: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:24:45.059: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:24:47.058: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:24:49.242: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:24:51.064: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:24:53.062: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:24:55.061: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:24:57.061: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:24:59.066: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:25:01.056: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:25:03.065: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:25:05.061: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:25:07.060: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:25:09.065: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:25:11.056: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 25 11:25:11.061: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jan 25 11:25:25.179: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1026 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:25:25.179: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:25:25.242094       9 log.go:172] (0xc0027f8000) (0xc001ff32c0) Create stream
I0125 11:25:25.242189       9 log.go:172] (0xc0027f8000) (0xc001ff32c0) Stream added, broadcasting: 1
I0125 11:25:25.247396       9 log.go:172] (0xc0027f8000) Reply frame received for 1
I0125 11:25:25.247453       9 log.go:172] (0xc0027f8000) (0xc00130a0a0) Create stream
I0125 11:25:25.247477       9 log.go:172] (0xc0027f8000) (0xc00130a0a0) Stream added, broadcasting: 3
I0125 11:25:25.249631       9 log.go:172] (0xc0027f8000) Reply frame received for 3
I0125 11:25:25.249697       9 log.go:172] (0xc0027f8000) (0xc0015c4820) Create stream
I0125 11:25:25.249708       9 log.go:172] (0xc0027f8000) (0xc0015c4820) Stream added, broadcasting: 5
I0125 11:25:25.251947       9 log.go:172] (0xc0027f8000) Reply frame received for 5
I0125 11:25:26.333486       9 log.go:172] (0xc0027f8000) Data frame received for 3
I0125 11:25:26.333577       9 log.go:172] (0xc00130a0a0) (3) Data frame handling
I0125 11:25:26.333602       9 log.go:172] (0xc00130a0a0) (3) Data frame sent
I0125 11:25:26.430992       9 log.go:172] (0xc0027f8000) Data frame received for 1
I0125 11:25:26.431159       9 log.go:172] (0xc0027f8000) (0xc0015c4820) Stream removed, broadcasting: 5
I0125 11:25:26.431247       9 log.go:172] (0xc001ff32c0) (1) Data frame handling
I0125 11:25:26.431286       9 log.go:172] (0xc001ff32c0) (1) Data frame sent
I0125 11:25:26.431657       9 log.go:172] (0xc0027f8000) (0xc00130a0a0) Stream removed, broadcasting: 3
I0125 11:25:26.431701       9 log.go:172] (0xc0027f8000) (0xc001ff32c0) Stream removed, broadcasting: 1
I0125 11:25:26.431737       9 log.go:172] (0xc0027f8000) Go away received
I0125 11:25:26.432313       9 log.go:172] (0xc0027f8000) (0xc001ff32c0) Stream removed, broadcasting: 1
I0125 11:25:26.432335       9 log.go:172] (0xc0027f8000) (0xc00130a0a0) Stream removed, broadcasting: 3
I0125 11:25:26.432342       9 log.go:172] (0xc0027f8000) (0xc0015c4820) Stream removed, broadcasting: 5
Jan 25 11:25:26.432: INFO: Found all expected endpoints: [netserver-0]
Jan 25 11:25:26.440: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1026 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:25:26.440: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:25:26.501412       9 log.go:172] (0xc001d3e9a0) (0xc0015c4a00) Create stream
I0125 11:25:26.501631       9 log.go:172] (0xc001d3e9a0) (0xc0015c4a00) Stream added, broadcasting: 1
I0125 11:25:26.511519       9 log.go:172] (0xc001d3e9a0) Reply frame received for 1
I0125 11:25:26.511686       9 log.go:172] (0xc001d3e9a0) (0xc000422f00) Create stream
I0125 11:25:26.511707       9 log.go:172] (0xc001d3e9a0) (0xc000422f00) Stream added, broadcasting: 3
I0125 11:25:26.513636       9 log.go:172] (0xc001d3e9a0) Reply frame received for 3
I0125 11:25:26.513687       9 log.go:172] (0xc001d3e9a0) (0xc0015c4aa0) Create stream
I0125 11:25:26.513710       9 log.go:172] (0xc001d3e9a0) (0xc0015c4aa0) Stream added, broadcasting: 5
I0125 11:25:26.515050       9 log.go:172] (0xc001d3e9a0) Reply frame received for 5
I0125 11:25:27.641504       9 log.go:172] (0xc001d3e9a0) Data frame received for 3
I0125 11:25:27.641579       9 log.go:172] (0xc000422f00) (3) Data frame handling
I0125 11:25:27.641605       9 log.go:172] (0xc000422f00) (3) Data frame sent
I0125 11:25:27.738262       9 log.go:172] (0xc001d3e9a0) (0xc000422f00) Stream removed, broadcasting: 3
I0125 11:25:27.739113       9 log.go:172] (0xc001d3e9a0) Data frame received for 1
I0125 11:25:27.739517       9 log.go:172] (0xc001d3e9a0) (0xc0015c4aa0) Stream removed, broadcasting: 5
I0125 11:25:27.739983       9 log.go:172] (0xc0015c4a00) (1) Data frame handling
I0125 11:25:27.740291       9 log.go:172] (0xc0015c4a00) (1) Data frame sent
I0125 11:25:27.740424       9 log.go:172] (0xc001d3e9a0) (0xc0015c4a00) Stream removed, broadcasting: 1
I0125 11:25:27.741332       9 log.go:172] (0xc001d3e9a0) (0xc0015c4a00) Stream removed, broadcasting: 1
I0125 11:25:27.741400       9 log.go:172] (0xc001d3e9a0) (0xc000422f00) Stream removed, broadcasting: 3
I0125 11:25:27.741424       9 log.go:172] (0xc001d3e9a0) (0xc0015c4aa0) Stream removed, broadcasting: 5
Jan 25 11:25:27.741: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:25:27.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0125 11:25:27.745468       9 log.go:172] (0xc001d3e9a0) Go away received
STEP: Destroying namespace "pod-network-test-1026" for this suite.

• [SLOW TEST:45.120 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":231,"skipped":3930,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:25:27.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:25:28.058: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 25 11:25:28.075: INFO: Number of nodes with available pods: 0
Jan 25 11:25:28.075: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 25 11:25:28.105: INFO: Number of nodes with available pods: 0
Jan 25 11:25:28.105: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:29.720: INFO: Number of nodes with available pods: 0
Jan 25 11:25:29.721: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:30.218: INFO: Number of nodes with available pods: 0
Jan 25 11:25:30.218: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:31.112: INFO: Number of nodes with available pods: 0
Jan 25 11:25:31.113: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:32.998: INFO: Number of nodes with available pods: 0
Jan 25 11:25:32.998: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:33.297: INFO: Number of nodes with available pods: 0
Jan 25 11:25:33.298: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:34.285: INFO: Number of nodes with available pods: 0
Jan 25 11:25:34.285: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:35.152: INFO: Number of nodes with available pods: 0
Jan 25 11:25:35.153: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:36.114: INFO: Number of nodes with available pods: 0
Jan 25 11:25:36.114: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:37.128: INFO: Number of nodes with available pods: 1
Jan 25 11:25:37.128: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 25 11:25:37.259: INFO: Number of nodes with available pods: 1
Jan 25 11:25:37.259: INFO: Number of running nodes: 0, number of available pods: 1
Jan 25 11:25:39.192: INFO: Number of nodes with available pods: 0
Jan 25 11:25:39.193: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 25 11:25:39.260: INFO: Number of nodes with available pods: 0
Jan 25 11:25:39.260: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:40.267: INFO: Number of nodes with available pods: 0
Jan 25 11:25:40.267: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:41.270: INFO: Number of nodes with available pods: 0
Jan 25 11:25:41.270: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:42.267: INFO: Number of nodes with available pods: 0
Jan 25 11:25:42.268: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:43.660: INFO: Number of nodes with available pods: 0
Jan 25 11:25:43.660: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:44.271: INFO: Number of nodes with available pods: 0
Jan 25 11:25:44.272: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:45.313: INFO: Number of nodes with available pods: 0
Jan 25 11:25:45.313: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:46.274: INFO: Number of nodes with available pods: 0
Jan 25 11:25:46.274: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:47.268: INFO: Number of nodes with available pods: 0
Jan 25 11:25:47.268: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:48.266: INFO: Number of nodes with available pods: 0
Jan 25 11:25:48.266: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:49.268: INFO: Number of nodes with available pods: 0
Jan 25 11:25:49.268: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:50.271: INFO: Number of nodes with available pods: 0
Jan 25 11:25:50.272: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:51.272: INFO: Number of nodes with available pods: 0
Jan 25 11:25:51.272: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:52.267: INFO: Number of nodes with available pods: 0
Jan 25 11:25:52.267: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:53.277: INFO: Number of nodes with available pods: 0
Jan 25 11:25:53.277: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:54.337: INFO: Number of nodes with available pods: 0
Jan 25 11:25:54.337: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:55.270: INFO: Number of nodes with available pods: 0
Jan 25 11:25:55.270: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:56.269: INFO: Number of nodes with available pods: 0
Jan 25 11:25:56.269: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:57.933: INFO: Number of nodes with available pods: 0
Jan 25 11:25:57.933: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:58.351: INFO: Number of nodes with available pods: 0
Jan 25 11:25:58.351: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:25:59.414: INFO: Number of nodes with available pods: 0
Jan 25 11:25:59.414: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:26:00.271: INFO: Number of nodes with available pods: 0
Jan 25 11:26:00.271: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 11:26:01.273: INFO: Number of nodes with available pods: 1
Jan 25 11:26:01.273: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-795, will wait for the garbage collector to delete the pods
Jan 25 11:26:01.351: INFO: Deleting DaemonSet.extensions daemon-set took: 10.433883ms
Jan 25 11:26:01.652: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.525612ms
Jan 25 11:26:07.983: INFO: Number of nodes with available pods: 0
Jan 25 11:26:07.983: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 11:26:07.987: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-795/daemonsets","resourceVersion":"4236595"},"items":null}

Jan 25 11:26:07.989: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-795/pods","resourceVersion":"4236595"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:26:08.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-795" for this suite.

• [SLOW TEST:40.294 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":279,"completed":232,"skipped":3932,"failed":0}
SSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:26:08.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:26:08.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-5134
I0125 11:26:08.377638       9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5134, replica count: 1
I0125 11:26:09.429740       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:26:10.430264       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:26:11.431048       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:26:12.431788       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:26:13.432876       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:26:14.433890       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:26:15.434945       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:26:16.435977       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:26:17.436789       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
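Note: each Created/Got pair below times a single service: the suite creates a service selecting the svc-latency-rc pod, then measures how long until the endpoints controller publishes a ready address; the bracketed duration is that gap. A rough manual equivalent (service name hypothetical; the port depends on what the pod serves):

    kubectl expose rc svc-latency-rc --name=latency-svc-demo --port=80
    kubectl get endpoints latency-svc-demo -w    # stop the clock at the first non-empty ADDRESS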
Jan 25 11:26:17.611: INFO: Created: latency-svc-vrg52
Jan 25 11:26:17.619: INFO: Got endpoints: latency-svc-vrg52 [81.760067ms]
Jan 25 11:26:17.721: INFO: Created: latency-svc-gg2vt
Jan 25 11:26:17.732: INFO: Got endpoints: latency-svc-gg2vt [112.272625ms]
Jan 25 11:26:17.759: INFO: Created: latency-svc-l2xcz
Jan 25 11:26:17.763: INFO: Got endpoints: latency-svc-l2xcz [142.578555ms]
Jan 25 11:26:17.787: INFO: Created: latency-svc-xwd4n
Jan 25 11:26:17.994: INFO: Got endpoints: latency-svc-xwd4n [373.215545ms]
Jan 25 11:26:17.997: INFO: Created: latency-svc-l5snl
Jan 25 11:26:18.004: INFO: Got endpoints: latency-svc-l5snl [382.277017ms]
Jan 25 11:26:18.041: INFO: Created: latency-svc-r2qsj
Jan 25 11:26:18.052: INFO: Got endpoints: latency-svc-r2qsj [429.86821ms]
Jan 25 11:26:18.071: INFO: Created: latency-svc-fp6hm
Jan 25 11:26:18.081: INFO: Got endpoints: latency-svc-fp6hm [458.286002ms]
Jan 25 11:26:18.148: INFO: Created: latency-svc-9th9v
Jan 25 11:26:18.150: INFO: Got endpoints: latency-svc-9th9v [528.956956ms]
Jan 25 11:26:18.179: INFO: Created: latency-svc-cnnsc
Jan 25 11:26:18.181: INFO: Got endpoints: latency-svc-cnnsc [559.269121ms]
Jan 25 11:26:18.198: INFO: Created: latency-svc-kppx9
Jan 25 11:26:18.203: INFO: Got endpoints: latency-svc-kppx9 [579.81237ms]
Jan 25 11:26:18.225: INFO: Created: latency-svc-h5szh
Jan 25 11:26:18.284: INFO: Got endpoints: latency-svc-h5szh [661.577041ms]
Jan 25 11:26:18.287: INFO: Created: latency-svc-d24rd
Jan 25 11:26:18.295: INFO: Got endpoints: latency-svc-d24rd [672.676807ms]
Jan 25 11:26:18.370: INFO: Created: latency-svc-hjpk4
Jan 25 11:26:18.377: INFO: Got endpoints: latency-svc-hjpk4 [753.841065ms]
Jan 25 11:26:18.445: INFO: Created: latency-svc-5wrc8
Jan 25 11:26:18.480: INFO: Got endpoints: latency-svc-5wrc8 [856.308856ms]
Jan 25 11:26:18.511: INFO: Created: latency-svc-4bgr7
Jan 25 11:26:18.512: INFO: Got endpoints: latency-svc-4bgr7 [889.51497ms]
Jan 25 11:26:18.536: INFO: Created: latency-svc-rwfxz
Jan 25 11:26:18.542: INFO: Got endpoints: latency-svc-rwfxz [919.700802ms]
Jan 25 11:26:18.584: INFO: Created: latency-svc-l5jdg
Jan 25 11:26:18.599: INFO: Got endpoints: latency-svc-l5jdg [866.77568ms]
Jan 25 11:26:18.619: INFO: Created: latency-svc-qqv5c
Jan 25 11:26:18.649: INFO: Got endpoints: latency-svc-qqv5c [884.918802ms]
Jan 25 11:26:18.649: INFO: Created: latency-svc-4cbdp
Jan 25 11:26:18.657: INFO: Got endpoints: latency-svc-4cbdp [662.646825ms]
Jan 25 11:26:18.796: INFO: Created: latency-svc-z7hrp
Jan 25 11:26:18.875: INFO: Got endpoints: latency-svc-z7hrp [870.407436ms]
Jan 25 11:26:18.954: INFO: Created: latency-svc-7gbl9
Jan 25 11:26:18.963: INFO: Got endpoints: latency-svc-7gbl9 [910.895992ms]
Jan 25 11:26:18.999: INFO: Created: latency-svc-2xhmz
Jan 25 11:26:19.002: INFO: Got endpoints: latency-svc-2xhmz [921.196638ms]
Jan 25 11:26:19.033: INFO: Created: latency-svc-2qzz5
Jan 25 11:26:19.132: INFO: Created: latency-svc-6zq65
Jan 25 11:26:19.132: INFO: Got endpoints: latency-svc-2qzz5 [981.548849ms]
Jan 25 11:26:19.136: INFO: Got endpoints: latency-svc-6zq65 [954.666059ms]
Jan 25 11:26:19.172: INFO: Created: latency-svc-fxgqt
Jan 25 11:26:19.176: INFO: Got endpoints: latency-svc-fxgqt [973.472523ms]
Jan 25 11:26:19.297: INFO: Created: latency-svc-hw7h2
Jan 25 11:26:19.307: INFO: Got endpoints: latency-svc-hw7h2 [1.023197795s]
Jan 25 11:26:19.349: INFO: Created: latency-svc-mwvsk
Jan 25 11:26:19.363: INFO: Got endpoints: latency-svc-mwvsk [1.068387204s]
Jan 25 11:26:19.477: INFO: Created: latency-svc-gfpt8
Jan 25 11:26:19.519: INFO: Got endpoints: latency-svc-gfpt8 [1.141275406s]
Jan 25 11:26:19.617: INFO: Created: latency-svc-5xw9g
Jan 25 11:26:19.620: INFO: Got endpoints: latency-svc-5xw9g [1.139407434s]
Jan 25 11:26:19.660: INFO: Created: latency-svc-m7pnh
Jan 25 11:26:19.668: INFO: Got endpoints: latency-svc-m7pnh [1.155178125s]
Jan 25 11:26:19.684: INFO: Created: latency-svc-xj7fm
Jan 25 11:26:19.699: INFO: Got endpoints: latency-svc-xj7fm [1.157361799s]
Jan 25 11:26:19.799: INFO: Created: latency-svc-xf9kp
Jan 25 11:26:19.822: INFO: Got endpoints: latency-svc-xf9kp [1.222698914s]
Jan 25 11:26:19.987: INFO: Created: latency-svc-pgtwm
Jan 25 11:26:19.987: INFO: Got endpoints: latency-svc-pgtwm [1.338231483s]
Jan 25 11:26:20.004: INFO: Created: latency-svc-cwcgr
Jan 25 11:26:20.013: INFO: Got endpoints: latency-svc-cwcgr [1.355365381s]
Jan 25 11:26:20.058: INFO: Created: latency-svc-bcvlb
Jan 25 11:26:20.065: INFO: Got endpoints: latency-svc-bcvlb [1.189369798s]
Jan 25 11:26:20.129: INFO: Created: latency-svc-b7k95
Jan 25 11:26:20.129: INFO: Got endpoints: latency-svc-b7k95 [1.166074667s]
Jan 25 11:26:20.158: INFO: Created: latency-svc-txq6v
Jan 25 11:26:20.162: INFO: Got endpoints: latency-svc-txq6v [1.159677757s]
Jan 25 11:26:20.207: INFO: Created: latency-svc-gltnj
Jan 25 11:26:20.213: INFO: Got endpoints: latency-svc-gltnj [1.081134892s]
Jan 25 11:26:20.279: INFO: Created: latency-svc-knsvc
Jan 25 11:26:20.288: INFO: Got endpoints: latency-svc-knsvc [1.152569104s]
Jan 25 11:26:20.367: INFO: Created: latency-svc-29bt4
Jan 25 11:26:20.374: INFO: Got endpoints: latency-svc-29bt4 [1.197535877s]
Jan 25 11:26:20.426: INFO: Created: latency-svc-w6fc6
Jan 25 11:26:20.456: INFO: Got endpoints: latency-svc-w6fc6 [1.148236371s]
Jan 25 11:26:20.461: INFO: Created: latency-svc-d55j9
Jan 25 11:26:20.468: INFO: Got endpoints: latency-svc-d55j9 [1.104203556s]
Jan 25 11:26:20.497: INFO: Created: latency-svc-ms6rv
Jan 25 11:26:20.502: INFO: Got endpoints: latency-svc-ms6rv [983.683204ms]
Jan 25 11:26:20.567: INFO: Created: latency-svc-chwc2
Jan 25 11:26:20.584: INFO: Got endpoints: latency-svc-chwc2 [963.998305ms]
Jan 25 11:26:20.589: INFO: Created: latency-svc-87f5p
Jan 25 11:26:20.599: INFO: Got endpoints: latency-svc-87f5p [930.933497ms]
Jan 25 11:26:20.626: INFO: Created: latency-svc-vbrzp
Jan 25 11:26:20.650: INFO: Got endpoints: latency-svc-vbrzp [950.921195ms]
Jan 25 11:26:20.782: INFO: Created: latency-svc-f2wfv
Jan 25 11:26:20.796: INFO: Got endpoints: latency-svc-f2wfv [974.067921ms]
Jan 25 11:26:20.833: INFO: Created: latency-svc-sd7gx
Jan 25 11:26:20.852: INFO: Got endpoints: latency-svc-sd7gx [864.725214ms]
Jan 25 11:26:20.869: INFO: Created: latency-svc-t9sw5
Jan 25 11:26:20.975: INFO: Got endpoints: latency-svc-t9sw5 [961.960138ms]
Jan 25 11:26:21.019: INFO: Created: latency-svc-jz562
Jan 25 11:26:21.045: INFO: Got endpoints: latency-svc-jz562 [980.147526ms]
Jan 25 11:26:21.046: INFO: Created: latency-svc-6n6zk
Jan 25 11:26:21.133: INFO: Got endpoints: latency-svc-6n6zk [1.003644287s]
Jan 25 11:26:21.140: INFO: Created: latency-svc-xbl64
Jan 25 11:26:21.161: INFO: Got endpoints: latency-svc-xbl64 [998.928667ms]
Jan 25 11:26:21.224: INFO: Created: latency-svc-npwvj
Jan 25 11:26:21.430: INFO: Got endpoints: latency-svc-npwvj [1.216823536s]
Jan 25 11:26:21.448: INFO: Created: latency-svc-dccds
Jan 25 11:26:21.463: INFO: Got endpoints: latency-svc-dccds [1.173887407s]
Jan 25 11:26:21.475: INFO: Created: latency-svc-t68lh
Jan 25 11:26:21.511: INFO: Got endpoints: latency-svc-t68lh [1.136607772s]
Jan 25 11:26:21.688: INFO: Created: latency-svc-v9qc7
Jan 25 11:26:21.688: INFO: Got endpoints: latency-svc-v9qc7 [1.232536969s]
Jan 25 11:26:21.745: INFO: Created: latency-svc-h56dv
Jan 25 11:26:21.771: INFO: Got endpoints: latency-svc-h56dv [1.302731935s]
Jan 25 11:26:21.777: INFO: Created: latency-svc-v4bb8
Jan 25 11:26:21.783: INFO: Got endpoints: latency-svc-v4bb8 [1.279768676s]
Jan 25 11:26:21.857: INFO: Created: latency-svc-qvhcw
Jan 25 11:26:21.872: INFO: Got endpoints: latency-svc-qvhcw [1.287381516s]
Jan 25 11:26:21.885: INFO: Created: latency-svc-lxh9v
Jan 25 11:26:21.896: INFO: Got endpoints: latency-svc-lxh9v [1.297138491s]
Jan 25 11:26:22.046: INFO: Created: latency-svc-ndzf7
Jan 25 11:26:22.088: INFO: Got endpoints: latency-svc-ndzf7 [1.437549181s]
Jan 25 11:26:22.092: INFO: Created: latency-svc-2ds95
Jan 25 11:26:22.097: INFO: Got endpoints: latency-svc-2ds95 [1.30029059s]
Jan 25 11:26:22.131: INFO: Created: latency-svc-fmzs9
Jan 25 11:26:22.136: INFO: Got endpoints: latency-svc-fmzs9 [1.28323265s]
Jan 25 11:26:22.199: INFO: Created: latency-svc-rw7fv
Jan 25 11:26:22.208: INFO: Got endpoints: latency-svc-rw7fv [1.23334056s]
Jan 25 11:26:22.248: INFO: Created: latency-svc-n5vm9
Jan 25 11:26:22.255: INFO: Got endpoints: latency-svc-n5vm9 [1.210045638s]
Jan 25 11:26:22.282: INFO: Created: latency-svc-8l9bt
Jan 25 11:26:22.290: INFO: Got endpoints: latency-svc-8l9bt [1.156374822s]
Jan 25 11:26:22.511: INFO: Created: latency-svc-8tc5n
Jan 25 11:26:22.518: INFO: Got endpoints: latency-svc-8tc5n [1.357727132s]
Jan 25 11:26:22.552: INFO: Created: latency-svc-fndkn
Jan 25 11:26:22.569: INFO: Got endpoints: latency-svc-fndkn [1.138833916s]
Jan 25 11:26:22.598: INFO: Created: latency-svc-tvfg7
Jan 25 11:26:22.664: INFO: Got endpoints: latency-svc-tvfg7 [1.201520484s]
Jan 25 11:26:22.672: INFO: Created: latency-svc-zvwjh
Jan 25 11:26:22.692: INFO: Got endpoints: latency-svc-zvwjh [1.180586353s]
Jan 25 11:26:22.719: INFO: Created: latency-svc-4g7rt
Jan 25 11:26:22.723: INFO: Got endpoints: latency-svc-4g7rt [1.033940291s]
Jan 25 11:26:22.742: INFO: Created: latency-svc-vm5fj
Jan 25 11:26:22.752: INFO: Got endpoints: latency-svc-vm5fj [981.06536ms]
Jan 25 11:26:22.959: INFO: Created: latency-svc-t7mzt
Jan 25 11:26:22.984: INFO: Got endpoints: latency-svc-t7mzt [1.201284006s]
Jan 25 11:26:23.125: INFO: Created: latency-svc-8djws
Jan 25 11:26:23.137: INFO: Got endpoints: latency-svc-8djws [1.265242026s]
Jan 25 11:26:23.163: INFO: Created: latency-svc-4tfmt
Jan 25 11:26:23.193: INFO: Got endpoints: latency-svc-4tfmt [1.296739061s]
Jan 25 11:26:23.332: INFO: Created: latency-svc-92q2v
Jan 25 11:26:23.336: INFO: Got endpoints: latency-svc-92q2v [1.247354319s]
Jan 25 11:26:23.418: INFO: Created: latency-svc-kj22x
Jan 25 11:26:23.429: INFO: Got endpoints: latency-svc-kj22x [1.332310224s]
Jan 25 11:26:23.567: INFO: Created: latency-svc-fp9mx
Jan 25 11:26:23.640: INFO: Got endpoints: latency-svc-fp9mx [1.504454665s]
Jan 25 11:26:23.742: INFO: Created: latency-svc-k47nv
Jan 25 11:26:23.759: INFO: Got endpoints: latency-svc-k47nv [1.550861334s]
Jan 25 11:26:23.823: INFO: Created: latency-svc-xmjqn
Jan 25 11:26:23.827: INFO: Got endpoints: latency-svc-xmjqn [1.571655793s]
Jan 25 11:26:24.047: INFO: Created: latency-svc-7h54b
Jan 25 11:26:24.059: INFO: Got endpoints: latency-svc-7h54b [1.769295745s]
Jan 25 11:26:24.126: INFO: Created: latency-svc-knpbt
Jan 25 11:26:24.145: INFO: Got endpoints: latency-svc-knpbt [1.626291504s]
Jan 25 11:26:24.278: INFO: Created: latency-svc-q67jk
Jan 25 11:26:24.322: INFO: Created: latency-svc-wzbpj
Jan 25 11:26:24.323: INFO: Got endpoints: latency-svc-q67jk [1.753127086s]
Jan 25 11:26:24.339: INFO: Got endpoints: latency-svc-wzbpj [1.673951278s]
Jan 25 11:26:24.519: INFO: Created: latency-svc-n8597
Jan 25 11:26:24.523: INFO: Got endpoints: latency-svc-n8597 [1.830668117s]
Jan 25 11:26:24.586: INFO: Created: latency-svc-6qg42
Jan 25 11:26:24.598: INFO: Got endpoints: latency-svc-6qg42 [1.875698418s]
Jan 25 11:26:24.714: INFO: Created: latency-svc-dqxtv
Jan 25 11:26:24.725: INFO: Got endpoints: latency-svc-dqxtv [1.971992603s]
Jan 25 11:26:24.892: INFO: Created: latency-svc-xxtlf
Jan 25 11:26:24.895: INFO: Got endpoints: latency-svc-xxtlf [1.910354405s]
Jan 25 11:26:24.964: INFO: Created: latency-svc-2zblc
Jan 25 11:26:24.968: INFO: Got endpoints: latency-svc-2zblc [1.830137559s]
Jan 25 11:26:25.079: INFO: Created: latency-svc-4wzsl
Jan 25 11:26:25.083: INFO: Got endpoints: latency-svc-4wzsl [1.889804642s]
Jan 25 11:26:25.103: INFO: Created: latency-svc-gp5ph
Jan 25 11:26:25.114: INFO: Got endpoints: latency-svc-gp5ph [1.777758628s]
Jan 25 11:26:25.147: INFO: Created: latency-svc-zb7mx
Jan 25 11:26:25.160: INFO: Got endpoints: latency-svc-zb7mx [1.730486635s]
Jan 25 11:26:25.298: INFO: Created: latency-svc-b6p7d
Jan 25 11:26:25.314: INFO: Got endpoints: latency-svc-b6p7d [1.673785549s]
Jan 25 11:26:25.358: INFO: Created: latency-svc-ldz6w
Jan 25 11:26:25.366: INFO: Got endpoints: latency-svc-ldz6w [1.606990547s]
Jan 25 11:26:25.534: INFO: Created: latency-svc-fzh62
Jan 25 11:26:25.576: INFO: Got endpoints: latency-svc-fzh62 [1.748968531s]
Jan 25 11:26:25.625: INFO: Created: latency-svc-r8w4p
Jan 25 11:26:25.720: INFO: Got endpoints: latency-svc-r8w4p [1.659918185s]
Jan 25 11:26:25.743: INFO: Created: latency-svc-bqrdh
Jan 25 11:26:25.756: INFO: Got endpoints: latency-svc-bqrdh [1.61036759s]
Jan 25 11:26:25.806: INFO: Created: latency-svc-w7b77
Jan 25 11:26:25.956: INFO: Got endpoints: latency-svc-w7b77 [1.633272623s]
Jan 25 11:26:26.005: INFO: Created: latency-svc-hh5gr
Jan 25 11:26:26.011: INFO: Got endpoints: latency-svc-hh5gr [1.671709158s]
Jan 25 11:26:26.035: INFO: Created: latency-svc-kb7mk
Jan 25 11:26:26.041: INFO: Got endpoints: latency-svc-kb7mk [1.518243457s]
Jan 25 11:26:26.158: INFO: Created: latency-svc-q6xdz
Jan 25 11:26:26.176: INFO: Got endpoints: latency-svc-q6xdz [1.577573458s]
Jan 25 11:26:26.200: INFO: Created: latency-svc-rr9w7
Jan 25 11:26:26.210: INFO: Got endpoints: latency-svc-rr9w7 [1.485076018s]
Jan 25 11:26:26.425: INFO: Created: latency-svc-spl24
Jan 25 11:26:26.482: INFO: Got endpoints: latency-svc-spl24 [1.58684667s]
Jan 25 11:26:26.488: INFO: Created: latency-svc-dmdz8
Jan 25 11:26:26.494: INFO: Got endpoints: latency-svc-dmdz8 [1.52585407s]
Jan 25 11:26:26.645: INFO: Created: latency-svc-lcjr8
Jan 25 11:26:26.655: INFO: Got endpoints: latency-svc-lcjr8 [1.572236601s]
Jan 25 11:26:26.691: INFO: Created: latency-svc-xcn2n
Jan 25 11:26:26.701: INFO: Got endpoints: latency-svc-xcn2n [1.587470824s]
Jan 25 11:26:26.805: INFO: Created: latency-svc-5q7lt
Jan 25 11:26:26.973: INFO: Created: latency-svc-lbxrr
Jan 25 11:26:26.974: INFO: Got endpoints: latency-svc-5q7lt [1.81356309s]
Jan 25 11:26:27.024: INFO: Got endpoints: latency-svc-lbxrr [1.709369709s]
Jan 25 11:26:27.026: INFO: Created: latency-svc-xb7hp
Jan 25 11:26:27.071: INFO: Got endpoints: latency-svc-xb7hp [1.704286771s]
Jan 25 11:26:27.213: INFO: Created: latency-svc-2h7zq
Jan 25 11:26:27.230: INFO: Got endpoints: latency-svc-2h7zq [1.653927216s]
Jan 25 11:26:27.303: INFO: Created: latency-svc-cd262
Jan 25 11:26:27.310: INFO: Got endpoints: latency-svc-cd262 [1.589758199s]
Jan 25 11:26:27.403: INFO: Created: latency-svc-jq5fq
Jan 25 11:26:27.412: INFO: Got endpoints: latency-svc-jq5fq [1.655174649s]
Jan 25 11:26:27.485: INFO: Created: latency-svc-zrfqq
Jan 25 11:26:27.607: INFO: Got endpoints: latency-svc-zrfqq [1.649893406s]
Jan 25 11:26:27.629: INFO: Created: latency-svc-72x8s
Jan 25 11:26:27.640: INFO: Got endpoints: latency-svc-72x8s [1.628684508s]
Jan 25 11:26:27.842: INFO: Created: latency-svc-l4wpz
Jan 25 11:26:27.887: INFO: Got endpoints: latency-svc-l4wpz [1.845207798s]
Jan 25 11:26:28.116: INFO: Created: latency-svc-szccs
Jan 25 11:26:28.128: INFO: Got endpoints: latency-svc-szccs [1.952076969s]
Jan 25 11:26:28.180: INFO: Created: latency-svc-jdvlv
Jan 25 11:26:28.182: INFO: Got endpoints: latency-svc-jdvlv [1.971930445s]
Jan 25 11:26:28.289: INFO: Created: latency-svc-ftcc5
Jan 25 11:26:28.294: INFO: Got endpoints: latency-svc-ftcc5 [1.811623163s]
Jan 25 11:26:28.349: INFO: Created: latency-svc-jpm9c
Jan 25 11:26:28.364: INFO: Got endpoints: latency-svc-jpm9c [1.869628663s]
Jan 25 11:26:28.494: INFO: Created: latency-svc-2drvj
Jan 25 11:26:28.515: INFO: Got endpoints: latency-svc-2drvj [1.859593272s]
Jan 25 11:26:28.541: INFO: Created: latency-svc-fj9nz
Jan 25 11:26:28.553: INFO: Got endpoints: latency-svc-fj9nz [1.850901017s]
Jan 25 11:26:28.664: INFO: Created: latency-svc-5kpfr
Jan 25 11:26:28.674: INFO: Got endpoints: latency-svc-5kpfr [1.700714626s]
Jan 25 11:26:28.687: INFO: Created: latency-svc-b9nd7
Jan 25 11:26:28.718: INFO: Got endpoints: latency-svc-b9nd7 [1.693617936s]
Jan 25 11:26:28.728: INFO: Created: latency-svc-2dwcg
Jan 25 11:26:28.747: INFO: Got endpoints: latency-svc-2dwcg [1.675582894s]
Jan 25 11:26:28.754: INFO: Created: latency-svc-vwvdk
Jan 25 11:26:28.811: INFO: Got endpoints: latency-svc-vwvdk [1.580474163s]
Jan 25 11:26:28.817: INFO: Created: latency-svc-j7r2f
Jan 25 11:26:28.825: INFO: Got endpoints: latency-svc-j7r2f [1.515478452s]
Jan 25 11:26:28.856: INFO: Created: latency-svc-svm9l
Jan 25 11:26:28.880: INFO: Created: latency-svc-vcdff
Jan 25 11:26:28.880: INFO: Got endpoints: latency-svc-svm9l [1.468678174s]
Jan 25 11:26:28.887: INFO: Got endpoints: latency-svc-vcdff [1.279853912s]
Jan 25 11:26:29.004: INFO: Created: latency-svc-vdmkg
Jan 25 11:26:29.041: INFO: Got endpoints: latency-svc-vdmkg [1.401105683s]
Jan 25 11:26:29.065: INFO: Created: latency-svc-w7j2s
Jan 25 11:26:29.088: INFO: Got endpoints: latency-svc-w7j2s [1.201292168s]
Jan 25 11:26:29.090: INFO: Created: latency-svc-qmpq7
Jan 25 11:26:29.151: INFO: Got endpoints: latency-svc-qmpq7 [1.021909211s]
Jan 25 11:26:29.156: INFO: Created: latency-svc-qvqzb
Jan 25 11:26:29.164: INFO: Got endpoints: latency-svc-qvqzb [981.521286ms]
Jan 25 11:26:29.192: INFO: Created: latency-svc-qs2g6
Jan 25 11:26:29.196: INFO: Got endpoints: latency-svc-qs2g6 [902.367517ms]
Jan 25 11:26:29.220: INFO: Created: latency-svc-jww6p
Jan 25 11:26:29.224: INFO: Got endpoints: latency-svc-jww6p [860.400257ms]
Jan 25 11:26:29.302: INFO: Created: latency-svc-ntjcd
Jan 25 11:26:29.467: INFO: Got endpoints: latency-svc-ntjcd [951.072821ms]
Jan 25 11:26:29.495: INFO: Created: latency-svc-t7mhr
Jan 25 11:26:29.503: INFO: Got endpoints: latency-svc-t7mhr [278.252614ms]
Jan 25 11:26:29.559: INFO: Created: latency-svc-6lfvr
Jan 25 11:26:29.620: INFO: Got endpoints: latency-svc-6lfvr [1.066621159s]
Jan 25 11:26:29.629: INFO: Created: latency-svc-fccww
Jan 25 11:26:29.663: INFO: Got endpoints: latency-svc-fccww [987.770093ms]
Jan 25 11:26:29.687: INFO: Created: latency-svc-ggp2z
Jan 25 11:26:29.697: INFO: Got endpoints: latency-svc-ggp2z [978.510109ms]
Jan 25 11:26:29.721: INFO: Created: latency-svc-5ht67
Jan 25 11:26:29.809: INFO: Got endpoints: latency-svc-5ht67 [1.062237637s]
Jan 25 11:26:29.824: INFO: Created: latency-svc-qrq5r
Jan 25 11:26:29.872: INFO: Got endpoints: latency-svc-qrq5r [1.060830569s]
Jan 25 11:26:29.992: INFO: Created: latency-svc-5bbgz
Jan 25 11:26:29.995: INFO: Got endpoints: latency-svc-5bbgz [1.170005349s]
Jan 25 11:26:30.022: INFO: Created: latency-svc-557pv
Jan 25 11:26:30.027: INFO: Got endpoints: latency-svc-557pv [1.146758635s]
Jan 25 11:26:30.067: INFO: Created: latency-svc-b9dmh
Jan 25 11:26:30.079: INFO: Got endpoints: latency-svc-b9dmh [1.19199522s]
Jan 25 11:26:30.082: INFO: Created: latency-svc-xlrvq
Jan 25 11:26:30.087: INFO: Got endpoints: latency-svc-xlrvq [1.045908907s]
Jan 25 11:26:30.162: INFO: Created: latency-svc-nsntv
Jan 25 11:26:30.166: INFO: Got endpoints: latency-svc-nsntv [1.077686728s]
Jan 25 11:26:30.186: INFO: Created: latency-svc-zsfwc
Jan 25 11:26:30.194: INFO: Got endpoints: latency-svc-zsfwc [1.043177443s]
Jan 25 11:26:30.221: INFO: Created: latency-svc-4bk5j
Jan 25 11:26:30.227: INFO: Got endpoints: latency-svc-4bk5j [1.062769844s]
Jan 25 11:26:30.373: INFO: Created: latency-svc-8trdm
Jan 25 11:26:30.373: INFO: Got endpoints: latency-svc-8trdm [1.176428102s]
Jan 25 11:26:30.409: INFO: Created: latency-svc-kjvpn
Jan 25 11:26:30.420: INFO: Got endpoints: latency-svc-kjvpn [953.011714ms]
Jan 25 11:26:30.568: INFO: Created: latency-svc-28jws
Jan 25 11:26:30.577: INFO: Got endpoints: latency-svc-28jws [1.073951883s]
Jan 25 11:26:30.635: INFO: Created: latency-svc-vrdd5
Jan 25 11:26:30.757: INFO: Got endpoints: latency-svc-vrdd5 [1.136724837s]
Jan 25 11:26:30.802: INFO: Created: latency-svc-n2l86
Jan 25 11:26:30.815: INFO: Got endpoints: latency-svc-n2l86 [1.152410749s]
Jan 25 11:26:30.912: INFO: Created: latency-svc-7mtsb
Jan 25 11:26:30.976: INFO: Created: latency-svc-sxn6b
Jan 25 11:26:30.976: INFO: Got endpoints: latency-svc-7mtsb [1.279649549s]
Jan 25 11:26:31.183: INFO: Got endpoints: latency-svc-sxn6b [1.374044119s]
Jan 25 11:26:31.195: INFO: Created: latency-svc-6rhbw
Jan 25 11:26:31.208: INFO: Got endpoints: latency-svc-6rhbw [1.335341573s]
Jan 25 11:26:31.270: INFO: Created: latency-svc-xg7lv
Jan 25 11:26:31.280: INFO: Got endpoints: latency-svc-xg7lv [1.284238732s]
Jan 25 11:26:31.442: INFO: Created: latency-svc-cjmn2
Jan 25 11:26:31.485: INFO: Got endpoints: latency-svc-cjmn2 [1.457855289s]
Jan 25 11:26:31.513: INFO: Created: latency-svc-9d2ct
Jan 25 11:26:31.518: INFO: Got endpoints: latency-svc-9d2ct [1.439334757s]
Jan 25 11:26:31.638: INFO: Created: latency-svc-cx5vf
Jan 25 11:26:31.646: INFO: Got endpoints: latency-svc-cx5vf [1.558246961s]
Jan 25 11:26:31.726: INFO: Created: latency-svc-9c8fz
Jan 25 11:26:31.869: INFO: Got endpoints: latency-svc-9c8fz [1.70249145s]
Jan 25 11:26:31.921: INFO: Created: latency-svc-gf2lz
Jan 25 11:26:31.940: INFO: Got endpoints: latency-svc-gf2lz [1.745974836s]
Jan 25 11:26:32.069: INFO: Created: latency-svc-w6z67
Jan 25 11:26:32.084: INFO: Got endpoints: latency-svc-w6z67 [1.856735781s]
Jan 25 11:26:32.138: INFO: Created: latency-svc-m4nxl
Jan 25 11:26:32.153: INFO: Got endpoints: latency-svc-m4nxl [1.779936915s]
Jan 25 11:26:32.291: INFO: Created: latency-svc-2gbfp
Jan 25 11:26:32.291: INFO: Got endpoints: latency-svc-2gbfp [1.870562245s]
Jan 25 11:26:32.323: INFO: Created: latency-svc-rh4t6
Jan 25 11:26:32.332: INFO: Got endpoints: latency-svc-rh4t6 [1.754187091s]
Jan 25 11:26:32.367: INFO: Created: latency-svc-8w268
Jan 25 11:26:32.370: INFO: Got endpoints: latency-svc-8w268 [1.613674647s]
Jan 25 11:26:32.485: INFO: Created: latency-svc-5r2f8
Jan 25 11:26:32.496: INFO: Got endpoints: latency-svc-5r2f8 [1.679965824s]
Jan 25 11:26:32.557: INFO: Created: latency-svc-229dm
Jan 25 11:26:32.718: INFO: Got endpoints: latency-svc-229dm [1.741001271s]
Jan 25 11:26:32.778: INFO: Created: latency-svc-8s2k9
Jan 25 11:26:32.786: INFO: Got endpoints: latency-svc-8s2k9 [1.60222114s]
Jan 25 11:26:32.940: INFO: Created: latency-svc-66m97
Jan 25 11:26:32.957: INFO: Got endpoints: latency-svc-66m97 [1.74850695s]
Jan 25 11:26:33.016: INFO: Created: latency-svc-bltqc
Jan 25 11:26:33.131: INFO: Got endpoints: latency-svc-bltqc [1.851376559s]
Jan 25 11:26:33.187: INFO: Created: latency-svc-m4pz5
Jan 25 11:26:33.193: INFO: Got endpoints: latency-svc-m4pz5 [1.707521062s]
Jan 25 11:26:33.222: INFO: Created: latency-svc-q47r5
Jan 25 11:26:33.633: INFO: Got endpoints: latency-svc-q47r5 [2.114919011s]
Jan 25 11:26:33.665: INFO: Created: latency-svc-jqk5g
Jan 25 11:26:33.673: INFO: Got endpoints: latency-svc-jqk5g [2.026584831s]
Jan 25 11:26:33.886: INFO: Created: latency-svc-bq8g9
Jan 25 11:26:33.928: INFO: Created: latency-svc-x6dgp
Jan 25 11:26:33.931: INFO: Got endpoints: latency-svc-bq8g9 [2.062531544s]
Jan 25 11:26:33.985: INFO: Got endpoints: latency-svc-x6dgp [2.044768198s]
Jan 25 11:26:34.086: INFO: Created: latency-svc-zw2c7
Jan 25 11:26:34.092: INFO: Got endpoints: latency-svc-zw2c7 [2.008140575s]
Jan 25 11:26:34.126: INFO: Created: latency-svc-lnh8r
Jan 25 11:26:34.141: INFO: Got endpoints: latency-svc-lnh8r [1.987349003s]
Jan 25 11:26:34.179: INFO: Created: latency-svc-mzwjj
Jan 25 11:26:34.256: INFO: Got endpoints: latency-svc-mzwjj [1.964981422s]
Jan 25 11:26:34.292: INFO: Created: latency-svc-85dcp
Jan 25 11:26:34.304: INFO: Got endpoints: latency-svc-85dcp [1.972330975s]
Jan 25 11:26:34.346: INFO: Created: latency-svc-l85vd
Jan 25 11:26:34.487: INFO: Got endpoints: latency-svc-l85vd [2.116612397s]
Jan 25 11:26:34.504: INFO: Created: latency-svc-z7dq8
Jan 25 11:26:34.548: INFO: Got endpoints: latency-svc-z7dq8 [2.052224523s]
Jan 25 11:26:34.710: INFO: Created: latency-svc-s4nwz
Jan 25 11:26:34.759: INFO: Created: latency-svc-nvhxs
Jan 25 11:26:34.759: INFO: Got endpoints: latency-svc-s4nwz [2.041415193s]
Jan 25 11:26:34.776: INFO: Got endpoints: latency-svc-nvhxs [1.989318461s]
Jan 25 11:26:34.938: INFO: Created: latency-svc-btkl5
Jan 25 11:26:34.957: INFO: Got endpoints: latency-svc-btkl5 [1.999689595s]
Jan 25 11:26:35.043: INFO: Created: latency-svc-7fghr
Jan 25 11:26:35.051: INFO: Got endpoints: latency-svc-7fghr [1.919772807s]
Jan 25 11:26:35.099: INFO: Created: latency-svc-7v86x
Jan 25 11:26:35.129: INFO: Got endpoints: latency-svc-7v86x [1.935422847s]
Jan 25 11:26:35.220: INFO: Created: latency-svc-t68qp
Jan 25 11:26:35.293: INFO: Got endpoints: latency-svc-t68qp [1.658944594s]
Jan 25 11:26:35.293: INFO: Created: latency-svc-52pq6
Jan 25 11:26:35.299: INFO: Got endpoints: latency-svc-52pq6 [1.625867016s]
Jan 25 11:26:35.420: INFO: Created: latency-svc-wnnpt
Jan 25 11:26:35.442: INFO: Got endpoints: latency-svc-wnnpt [1.510601458s]
Jan 25 11:26:35.489: INFO: Created: latency-svc-sj9gj
Jan 25 11:26:35.504: INFO: Got endpoints: latency-svc-sj9gj [1.518880587s]
Jan 25 11:26:35.600: INFO: Created: latency-svc-795gw
Jan 25 11:26:35.610: INFO: Got endpoints: latency-svc-795gw [1.51824209s]
Jan 25 11:26:35.641: INFO: Created: latency-svc-7ntnm
Jan 25 11:26:35.649: INFO: Got endpoints: latency-svc-7ntnm [1.507605839s]
Jan 25 11:26:35.684: INFO: Created: latency-svc-fdjj7
Jan 25 11:26:35.769: INFO: Got endpoints: latency-svc-fdjj7 [1.512138678s]
Jan 25 11:26:35.783: INFO: Created: latency-svc-dqrmk
Jan 25 11:26:35.807: INFO: Got endpoints: latency-svc-dqrmk [1.50268081s]
Jan 25 11:26:36.696: INFO: Created: latency-svc-jd8xr
Jan 25 11:26:36.715: INFO: Got endpoints: latency-svc-jd8xr [2.22773062s]
Jan 25 11:26:36.766: INFO: Created: latency-svc-ztjfk
Jan 25 11:26:36.784: INFO: Got endpoints: latency-svc-ztjfk [2.236003338s]
Jan 25 11:26:36.925: INFO: Created: latency-svc-8g68k
Jan 25 11:26:36.951: INFO: Got endpoints: latency-svc-8g68k [2.191850423s]
Jan 25 11:26:37.101: INFO: Created: latency-svc-vht2z
Jan 25 11:26:37.104: INFO: Got endpoints: latency-svc-vht2z [2.328136659s]
Jan 25 11:26:37.166: INFO: Created: latency-svc-fl962
Jan 25 11:26:37.183: INFO: Got endpoints: latency-svc-fl962 [2.226217038s]
Jan 25 11:26:37.183: INFO: Latencies: [112.272625ms 142.578555ms 278.252614ms 373.215545ms 382.277017ms 429.86821ms 458.286002ms 528.956956ms 559.269121ms 579.81237ms 661.577041ms 662.646825ms 672.676807ms 753.841065ms 856.308856ms 860.400257ms 864.725214ms 866.77568ms 870.407436ms 884.918802ms 889.51497ms 902.367517ms 910.895992ms 919.700802ms 921.196638ms 930.933497ms 950.921195ms 951.072821ms 953.011714ms 954.666059ms 961.960138ms 963.998305ms 973.472523ms 974.067921ms 978.510109ms 980.147526ms 981.06536ms 981.521286ms 981.548849ms 983.683204ms 987.770093ms 998.928667ms 1.003644287s 1.021909211s 1.023197795s 1.033940291s 1.043177443s 1.045908907s 1.060830569s 1.062237637s 1.062769844s 1.066621159s 1.068387204s 1.073951883s 1.077686728s 1.081134892s 1.104203556s 1.136607772s 1.136724837s 1.138833916s 1.139407434s 1.141275406s 1.146758635s 1.148236371s 1.152410749s 1.152569104s 1.155178125s 1.156374822s 1.157361799s 1.159677757s 1.166074667s 1.170005349s 1.173887407s 1.176428102s 1.180586353s 1.189369798s 1.19199522s 1.197535877s 1.201284006s 1.201292168s 1.201520484s 1.210045638s 1.216823536s 1.222698914s 1.232536969s 1.23334056s 1.247354319s 1.265242026s 1.279649549s 1.279768676s 1.279853912s 1.28323265s 1.284238732s 1.287381516s 1.296739061s 1.297138491s 1.30029059s 1.302731935s 1.332310224s 1.335341573s 1.338231483s 1.355365381s 1.357727132s 1.374044119s 1.401105683s 1.437549181s 1.439334757s 1.457855289s 1.468678174s 1.485076018s 1.50268081s 1.504454665s 1.507605839s 1.510601458s 1.512138678s 1.515478452s 1.51824209s 1.518243457s 1.518880587s 1.52585407s 1.550861334s 1.558246961s 1.571655793s 1.572236601s 1.577573458s 1.580474163s 1.58684667s 1.587470824s 1.589758199s 1.60222114s 1.606990547s 1.61036759s 1.613674647s 1.625867016s 1.626291504s 1.628684508s 1.633272623s 1.649893406s 1.653927216s 1.655174649s 1.658944594s 1.659918185s 1.671709158s 1.673785549s 1.673951278s 1.675582894s 1.679965824s 1.693617936s 1.700714626s 1.70249145s 1.704286771s 1.707521062s 1.709369709s 1.730486635s 1.741001271s 1.745974836s 1.74850695s 1.748968531s 1.753127086s 1.754187091s 1.769295745s 1.777758628s 1.779936915s 1.811623163s 1.81356309s 1.830137559s 1.830668117s 1.845207798s 1.850901017s 1.851376559s 1.856735781s 1.859593272s 1.869628663s 1.870562245s 1.875698418s 1.889804642s 1.910354405s 1.919772807s 1.935422847s 1.952076969s 1.964981422s 1.971930445s 1.971992603s 1.972330975s 1.987349003s 1.989318461s 1.999689595s 2.008140575s 2.026584831s 2.041415193s 2.044768198s 2.052224523s 2.062531544s 2.114919011s 2.116612397s 2.191850423s 2.226217038s 2.22773062s 2.236003338s 2.328136659s]
Jan 25 11:26:37.183: INFO: 50 %ile: 1.338231483s
Jan 25 11:26:37.183: INFO: 90 %ile: 1.964981422s
Jan 25 11:26:37.183: INFO: 99 %ile: 2.236003338s
Jan 25 11:26:37.183: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:26:37.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-5134" for this suite.

• [SLOW TEST:29.246 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":279,"completed":233,"skipped":3937,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
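Each Created/Got endpoints pair above times how long a freshly created Service takes to expose a ready Endpoints object backed by the svc-latency-rc pod; the 50/90/99 %ile lines then summarize all 200 samples against the suite's threshold. A sketch of the kind of Service being stamped out in a loop; the selector label is an assumption, since the run's actual spec is not printed:

  apiVersion: v1
  kind: Service
  metadata:
    name: latency-svc-example     # the run generates random suffixes such as latency-svc-vrg52
  spec:
    selector:
      name: svc-latency-rc        # assumed label on the pods of the svc-latency-rc controller
    ports:
    - port: 80
      protocol: TCP

The measured latency is the interval between creating the Service and observing its Endpoints populate with the backing pod.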
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:26:37.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-369e451f-ca97-4ec2-9150-b81eacbef041
STEP: Creating a pod to test consume secrets
Jan 25 11:26:37.584: INFO: Waiting up to 5m0s for pod "pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422" in namespace "secrets-5471" to be "success or failure"
Jan 25 11:26:37.770: INFO: Pod "pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422": Phase="Pending", Reason="", readiness=false. Elapsed: 185.491564ms
Jan 25 11:26:39.776: INFO: Pod "pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191222282s
Jan 25 11:26:41.788: INFO: Pod "pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203238831s
Jan 25 11:26:43.958: INFO: Pod "pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422": Phase="Pending", Reason="", readiness=false. Elapsed: 6.373581332s
Jan 25 11:26:46.153: INFO: Pod "pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422": Phase="Pending", Reason="", readiness=false. Elapsed: 8.568131301s
Jan 25 11:26:48.162: INFO: Pod "pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422": Phase="Pending", Reason="", readiness=false. Elapsed: 10.57734476s
Jan 25 11:26:50.227: INFO: Pod "pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422": Phase="Pending", Reason="", readiness=false. Elapsed: 12.642240388s
Jan 25 11:26:52.242: INFO: Pod "pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.657583489s
STEP: Saw pod success
Jan 25 11:26:52.243: INFO: Pod "pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422" satisfied condition "success or failure"
Jan 25 11:26:52.248: INFO: Trying to get logs from node jerma-node pod pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422 container secret-volume-test: 
STEP: delete the pod
Jan 25 11:26:52.503: INFO: Waiting for pod pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422 to disappear
Jan 25 11:26:52.521: INFO: Pod pod-secrets-60b510e1-0ebb-4867-a905-ea8d423ca422 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:26:52.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5471" for this suite.

• [SLOW TEST:15.430 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":234,"skipped":3959,"failed":0}
SSSS
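The Secrets spec above mounts a Secret into a pod running as a non-root user and asserts that defaultMode and fsGroup are applied to the mounted files. A minimal sketch of such a pod; the secret name, user/group IDs, mode, and image are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mode-example     # hypothetical name
  spec:
    securityContext:
      runAsUser: 1000             # non-root, per the [LinuxOnly] spec name
      fsGroup: 2000               # group ownership applied to the mounted files
    containers:
    - name: secret-volume-test    # container name as reported in the log above
      image: busybox              # illustrative image
      command: ["sh", "-c", "ls -l /etc/secret-volume"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: my-secret     # hypothetical; the run used a generated name
        defaultMode: 0440         # octal file mode of the kind the spec asserts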
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:26:52.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:27:05.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2226" for this suite.

• [SLOW TEST:12.632 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":235,"skipped":3963,"failed":0}
SS
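The Kubelet spec above schedules a busybox container with a read-only root filesystem and, per its name, verifies the container cannot write to it. A minimal sketch; the pod name, image, and command are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: readonly-root-example   # hypothetical name
  spec:
    containers:
    - name: busybox
      image: busybox              # illustrative image
      command: ["sh", "-c", "echo x > /x; sleep 60"]   # the write is expected to fail
      securityContext:
        readOnlyRootFilesystem: true                   # the setting under test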
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:27:05.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-e02b61bc-18e2-4619-9023-77dd2230bd65
STEP: Creating a pod to test consume secrets
Jan 25 11:27:06.011: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453" in namespace "projected-445" to be "success or failure"
Jan 25 11:27:06.139: INFO: Pod "pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453": Phase="Pending", Reason="", readiness=false. Elapsed: 128.272022ms
Jan 25 11:27:08.361: INFO: Pod "pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349483411s
Jan 25 11:27:10.384: INFO: Pod "pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37280213s
Jan 25 11:27:12.442: INFO: Pod "pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430484381s
Jan 25 11:27:14.500: INFO: Pod "pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453": Phase="Pending", Reason="", readiness=false. Elapsed: 8.489115106s
Jan 25 11:27:16.528: INFO: Pod "pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453": Phase="Pending", Reason="", readiness=false. Elapsed: 10.516859064s
Jan 25 11:27:18.542: INFO: Pod "pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.530599728s
STEP: Saw pod success
Jan 25 11:27:18.542: INFO: Pod "pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453" satisfied condition "success or failure"
Jan 25 11:27:18.620: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 11:27:18.800: INFO: Waiting for pod pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453 to disappear
Jan 25 11:27:18.839: INFO: Pod pod-projected-secrets-53890f10-a002-4f08-b6f7-2ad550ae0453 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:27:18.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-445" for this suite.

• [SLOW TEST:13.648 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":236,"skipped":3965,"failed":0}
SS
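The Projected secret spec above maps a Secret key to a new path with a per-item file mode inside a projected volume. A minimal sketch; the secret name, key, paths, and image are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-example        # hypothetical name
  spec:
    containers:
    - name: projected-secret-volume-test  # container name as reported in the log above
      image: busybox                      # illustrative image
      command: ["sh", "-c", "ls -l /etc/projected"]
      volumeMounts:
      - name: projected-secret
        mountPath: /etc/projected
    volumes:
    - name: projected-secret
      projected:
        sources:
        - secret:
            name: my-secret               # hypothetical; the run used a generated name
            items:
            - key: data-1                 # hypothetical key
              path: new-path-data-1       # the "mapping" in the spec name
              mode: 0400                  # the per-item "Item Mode" the spec asserts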
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:27:19.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 25 11:27:39.523: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 11:27:39.548: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 11:27:41.550: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 11:27:41.565: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 11:27:43.549: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 11:27:43.555: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 11:27:45.549: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 11:27:45.553: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 11:27:47.549: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 11:27:47.558: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 11:27:49.549: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 11:27:49.558: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 11:27:51.549: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 11:27:51.557: INFO: Pod pod-with-prestop-http-hook still exists
Jan 25 11:27:53.549: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 25 11:27:53.561: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:27:53.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6135" for this suite.

• [SLOW TEST:34.608 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":279,"completed":237,"skipped":3967,"failed":0}
SSSSSS
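The lifecycle-hook spec above first starts a handler pod (the "container to handle the HTTPGet hook request"), then creates a pod whose preStop hook issues an HTTP GET on deletion, and finally checks that the handler received the request. A minimal sketch of the hooked pod; the image, path, and port are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook    # pod name as reported in the log above
  spec:
    containers:
    - name: main
      image: k8s.gcr.io/pause:3.1       # illustrative image
      lifecycle:
        preStop:
          httpGet:
            path: /echo?msg=prestop     # illustrative handler endpoint
            port: 8080                  # illustrative port
            # host: <handler pod IP>    # the test directs the hook at the handler pod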
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:27:53.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6753.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6753.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6753.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6753.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6753.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6753.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 11:28:10.102: INFO: DNS probes using dns-6753/dns-test-6e443ebe-ce30-4bb0-a0c3-4345e3394ae3 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:28:10.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6753" for this suite.

• [SLOW TEST:16.636 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":279,"completed":238,"skipped":3973,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
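The DNS spec above creates a headless Service plus a pod whose hostname and subdomain produce the A record dns-querier-2.dns-test-service-2.dns-6753.svc.cluster.local that the wheezy/jessie probe loops resolve. A minimal sketch of that pairing; the service and hostname match the names embedded in the probe commands, while the image, port, and selector label are illustrative assumptions. First the headless Service:

  apiVersion: v1
  kind: Service
  metadata:
    name: dns-test-service-2
  spec:
    clusterIP: None               # headless, as created in the STEP above
    selector:
      name: dns-querier-2         # assumed label linking service to pod
    ports:
    - port: 80

and the pod that registers under it:

  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-querier-2
    labels:
      name: dns-querier-2
  spec:
    hostname: dns-querier-2       # yields the per-pod A record probed above
    subdomain: dns-test-service-2
    containers:
    - name: querier
      image: busybox              # illustrative image
      command: ["sleep", "600"]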
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:28:10.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-7077/configmap-test-c00b42b0-5f1e-46b1-adfb-3cbc1f779cd8
STEP: Creating a pod to test consume configMaps
Jan 25 11:28:10.536: INFO: Waiting up to 5m0s for pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3" in namespace "configmap-7077" to be "success or failure"
Jan 25 11:28:10.592: INFO: Pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 55.714167ms
Jan 25 11:28:12.600: INFO: Pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063965609s
Jan 25 11:28:14.615: INFO: Pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079255612s
Jan 25 11:28:16.719: INFO: Pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183234917s
Jan 25 11:28:18.728: INFO: Pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.192133743s
Jan 25 11:28:20.793: INFO: Pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.257088513s
Jan 25 11:28:22.800: INFO: Pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.264060779s
Jan 25 11:28:24.812: INFO: Pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.276253032s
Jan 25 11:28:26.828: INFO: Pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.291718867s
STEP: Saw pod success
Jan 25 11:28:26.828: INFO: Pod "pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3" satisfied condition "success or failure"
Jan 25 11:28:26.881: INFO: Trying to get logs from node jerma-node pod pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3 container env-test: 
STEP: delete the pod
Jan 25 11:28:27.068: INFO: Waiting for pod pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3 to disappear
Jan 25 11:28:27.074: INFO: Pod pod-configmaps-f50978b0-69c5-41c5-9c84-5ba897ca8ac3 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:28:27.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7077" for this suite.

• [SLOW TEST:16.807 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":279,"completed":239,"skipped":4001,"failed":0}
SSSSSSSSSSSS
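The ConfigMap spec above injects a ConfigMap key into a container's environment and verifies the value from the container's output. A minimal sketch; the ConfigMap name, key, variable name, and image are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-env-example       # hypothetical name
  spec:
    containers:
    - name: env-test                  # container name as reported in the log above
      image: busybox                  # illustrative image
      command: ["sh", "-c", "env"]    # prints the injected variable
      env:
      - name: CONFIG_DATA             # hypothetical variable name
        valueFrom:
          configMapKeyRef:
            name: my-config           # hypothetical; the run used a generated name
            key: data-1               # hypothetical key
    restartPolicy: Never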
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:28:27.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 11:28:27.326: INFO: Waiting up to 5m0s for pod "downwardapi-volume-239f5183-f980-4175-a646-d682941cc358" in namespace "projected-6985" to be "success or failure"
Jan 25 11:28:27.436: INFO: Pod "downwardapi-volume-239f5183-f980-4175-a646-d682941cc358": Phase="Pending", Reason="", readiness=false. Elapsed: 109.781012ms
Jan 25 11:28:29.444: INFO: Pod "downwardapi-volume-239f5183-f980-4175-a646-d682941cc358": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118245487s
Jan 25 11:28:31.457: INFO: Pod "downwardapi-volume-239f5183-f980-4175-a646-d682941cc358": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131067422s
Jan 25 11:28:33.468: INFO: Pod "downwardapi-volume-239f5183-f980-4175-a646-d682941cc358": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14257051s
Jan 25 11:28:35.477: INFO: Pod "downwardapi-volume-239f5183-f980-4175-a646-d682941cc358": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151383048s
Jan 25 11:28:37.486: INFO: Pod "downwardapi-volume-239f5183-f980-4175-a646-d682941cc358": Phase="Pending", Reason="", readiness=false. Elapsed: 10.159635399s
Jan 25 11:28:39.496: INFO: Pod "downwardapi-volume-239f5183-f980-4175-a646-d682941cc358": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.169888719s
STEP: Saw pod success
Jan 25 11:28:39.496: INFO: Pod "downwardapi-volume-239f5183-f980-4175-a646-d682941cc358" satisfied condition "success or failure"
Jan 25 11:28:39.501: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-239f5183-f980-4175-a646-d682941cc358 container client-container: 
STEP: delete the pod
Jan 25 11:28:39.555: INFO: Waiting for pod downwardapi-volume-239f5183-f980-4175-a646-d682941cc358 to disappear
Jan 25 11:28:39.562: INFO: Pod downwardapi-volume-239f5183-f980-4175-a646-d682941cc358 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:28:39.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6985" for this suite.

• [SLOW TEST:12.495 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":240,"skipped":4013,"failed":0}
SS
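The Projected downwardAPI spec above exposes pod metadata as files through a projected volume and asserts the files carry the volume's DefaultMode. A minimal sketch; the mode value, file path, and image are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mode-example    # hypothetical name
  spec:
    containers:
    - name: client-container          # container name as reported in the log above
      image: busybox                  # illustrative image
      command: ["sh", "-c", "ls -l /etc/podinfo"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        defaultMode: 0400             # the DefaultMode the spec asserts on the files
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name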
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:28:39.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-e632268a-2b1f-4985-9336-55d5aec4be2b
STEP: Creating configMap with name cm-test-opt-upd-26399d92-16f7-4aa9-bfd1-8531b215bc74
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e632268a-2b1f-4985-9336-55d5aec4be2b
STEP: Updating configmap cm-test-opt-upd-26399d92-16f7-4aa9-bfd1-8531b215bc74
STEP: Creating configMap with name cm-test-opt-create-3998514c-4530-476d-a959-92d8db505588
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:28:58.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6159" for this suite.

• [SLOW TEST:18.659 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":279,"completed":241,"skipped":4015,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
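The "optional updates" spec above mounts ConfigMaps marked optional, then deletes one, updates another, and creates a third, waiting for the kubelet to reflect each change in the volume. A minimal sketch of one such optional projected source; the names, command, and image are illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-cm-example         # hypothetical name
  spec:
    containers:
    - name: watcher
      image: busybox                  # illustrative image
      command: ["sh", "-c", "sleep 600"]
      volumeMounts:
      - name: cm-volume
        mountPath: /etc/cm
    volumes:
    - name: cm-volume
      projected:
        sources:
        - configMap:
            name: cm-test-opt-del     # hypothetical; the run appended a UUID
            optional: true            # pod stays healthy even after the ConfigMap is deleted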
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:28:58.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override all
Jan 25 11:28:58.475: INFO: Waiting up to 5m0s for pod "client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92" in namespace "containers-9880" to be "success or failure"
Jan 25 11:28:58.482: INFO: Pod "client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276764ms
Jan 25 11:29:00.491: INFO: Pod "client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015772204s
Jan 25 11:29:02.502: INFO: Pod "client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026796262s
Jan 25 11:29:04.514: INFO: Pod "client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038816365s
Jan 25 11:29:06.525: INFO: Pod "client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049298066s
Jan 25 11:29:08.543: INFO: Pod "client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92": Phase="Pending", Reason="", readiness=false. Elapsed: 10.067576545s
Jan 25 11:29:10.555: INFO: Pod "client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.080053987s
STEP: Saw pod success
Jan 25 11:29:10.556: INFO: Pod "client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92" satisfied condition "success or failure"
Jan 25 11:29:10.561: INFO: Trying to get logs from node jerma-node pod client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92 container test-container: 
STEP: delete the pod
Jan 25 11:29:10.697: INFO: Waiting for pod client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92 to disappear
Jan 25 11:29:10.701: INFO: Pod client-containers-9444ccf4-6b18-48ca-865c-7a4fa7285d92 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:29:10.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9880" for this suite.

• [SLOW TEST:12.464 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":279,"completed":242,"skipped":4049,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:29:10.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service multi-endpoint-test in namespace services-3271
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3271 to expose endpoints map[]
Jan 25 11:29:11.108: INFO: successfully validated that service multi-endpoint-test in namespace services-3271 exposes endpoints map[] (24.003984ms elapsed)
STEP: Creating pod pod1 in namespace services-3271
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3271 to expose endpoints map[pod1:[100]]
Jan 25 11:29:15.316: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.191133052s elapsed, will retry)
Jan 25 11:29:20.516: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.39107193s elapsed, will retry)
Jan 25 11:29:22.538: INFO: successfully validated that service multi-endpoint-test in namespace services-3271 exposes endpoints map[pod1:[100]] (11.413002267s elapsed)
STEP: Creating pod pod2 in namespace services-3271
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3271 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 25 11:29:29.182: INFO: Unexpected endpoints: found map[755114cc-62bf-4346-92af-b807b9d32222:[100]], expected map[pod1:[100] pod2:[101]] (6.635059134s elapsed, will retry)
Jan 25 11:29:32.345: INFO: successfully validated that service multi-endpoint-test in namespace services-3271 exposes endpoints map[pod1:[100] pod2:[101]] (9.797228167s elapsed)
STEP: Deleting pod pod1 in namespace services-3271
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3271 to expose endpoints map[pod2:[101]]
Jan 25 11:29:32.375: INFO: successfully validated that service multi-endpoint-test in namespace services-3271 exposes endpoints map[pod2:[101]] (24.138686ms elapsed)
STEP: Deleting pod pod2 in namespace services-3271
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3271 to expose endpoints map[]
Jan 25 11:29:32.481: INFO: successfully validated that service multi-endpoint-test in namespace services-3271 exposes endpoints map[] (98.927974ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:29:32.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3271" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:21.838 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":279,"completed":243,"skipped":4054,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:29:32.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 11:29:32.800: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a" in namespace "downward-api-8257" to be "success or failure"
Jan 25 11:29:32.815: INFO: Pod "downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.262075ms
Jan 25 11:29:35.115: INFO: Pod "downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314766574s
Jan 25 11:29:37.165: INFO: Pod "downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364869598s
Jan 25 11:29:39.171: INFO: Pod "downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.370760717s
Jan 25 11:29:41.182: INFO: Pod "downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381189878s
Jan 25 11:29:43.204: INFO: Pod "downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.403321083s
Jan 25 11:29:45.214: INFO: Pod "downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.413188262s
Jan 25 11:29:47.252: INFO: Pod "downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.451673151s
STEP: Saw pod success
Jan 25 11:29:47.252: INFO: Pod "downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a" satisfied condition "success or failure"
Jan 25 11:29:47.259: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a container client-container: 
STEP: delete the pod
Jan 25 11:29:47.311: INFO: Waiting for pod downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a to disappear
Jan 25 11:29:47.429: INFO: Pod downwardapi-volume-47b8c758-23c8-4a15-b260-17ac671b0e3a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:29:47.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8257" for this suite.

• [SLOW TEST:14.889 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":244,"skipped":4057,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:29:47.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:29:54.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5492" for this suite.

• [SLOW TEST:7.172 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":279,"completed":245,"skipped":4096,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:29:54.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 11:29:55.498: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 11:29:57.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:29:59.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:30:01.522: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:30:03.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:30:05.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548595, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 11:30:08.898: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:30:09.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4691" for this suite.
STEP: Destroying namespace "webhook-4691-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:15.382 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":279,"completed":246,"skipped":4105,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:30:10.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-7189bf76-fe66-4fba-8ce2-d7633b372a84
STEP: Creating a pod to test consume configMaps
Jan 25 11:30:10.155: INFO: Waiting up to 5m0s for pod "pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952" in namespace "configmap-6887" to be "success or failure"
Jan 25 11:30:10.193: INFO: Pod "pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952": Phase="Pending", Reason="", readiness=false. Elapsed: 38.547691ms
Jan 25 11:30:12.209: INFO: Pod "pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054360332s
Jan 25 11:30:14.237: INFO: Pod "pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081999505s
Jan 25 11:30:16.246: INFO: Pod "pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091315494s
Jan 25 11:30:18.256: INFO: Pod "pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101538258s
Jan 25 11:30:20.265: INFO: Pod "pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110513783s
STEP: Saw pod success
Jan 25 11:30:20.266: INFO: Pod "pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952" satisfied condition "success or failure"
Jan 25 11:30:20.271: INFO: Trying to get logs from node jerma-node pod pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952 container configmap-volume-test: 
STEP: delete the pod
Jan 25 11:30:20.403: INFO: Waiting for pod pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952 to disappear
Jan 25 11:30:20.417: INFO: Pod pod-configmaps-87d57c32-ba4e-4019-a124-12528ef44952 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:30:20.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6887" for this suite.

• [SLOW TEST:10.454 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":247,"skipped":4113,"failed":0}
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:30:20.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-fea21ba1-5b1d-4c6d-a3ab-4525cca912e1
STEP: Creating configMap with name cm-test-opt-upd-aceedbb5-377d-4674-84f2-3aa0689b39f2
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-fea21ba1-5b1d-4c6d-a3ab-4525cca912e1
STEP: Updating configmap cm-test-opt-upd-aceedbb5-377d-4674-84f2-3aa0689b39f2
STEP: Creating configMap with name cm-test-opt-create-76c719ef-ffb3-45ef-8ff3-92f805f89ff5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:30:41.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9880" for this suite.

• [SLOW TEST:20.658 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":279,"completed":248,"skipped":4113,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:30:41.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-7967
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 11:30:41.218: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 25 11:30:41.398: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:30:43.404: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:30:45.405: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:30:47.423: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:30:49.899: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:30:51.406: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 25 11:30:53.405: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:30:55.407: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:30:57.405: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:30:59.407: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:31:01.410: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:31:03.405: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:31:05.404: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 25 11:31:07.405: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 25 11:31:07.416: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jan 25 11:31:19.454: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.44.0.2&port=8080&tries=1'] Namespace:pod-network-test-7967 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:31:19.455: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:31:19.501401       9 log.go:172] (0xc002c46160) (0xc002346960) Create stream
I0125 11:31:19.501555       9 log.go:172] (0xc002c46160) (0xc002346960) Stream added, broadcasting: 1
I0125 11:31:19.505603       9 log.go:172] (0xc002c46160) Reply frame received for 1
I0125 11:31:19.505689       9 log.go:172] (0xc002c46160) (0xc002a1a780) Create stream
I0125 11:31:19.505703       9 log.go:172] (0xc002c46160) (0xc002a1a780) Stream added, broadcasting: 3
I0125 11:31:19.508602       9 log.go:172] (0xc002c46160) Reply frame received for 3
I0125 11:31:19.508656       9 log.go:172] (0xc002c46160) (0xc000b66460) Create stream
I0125 11:31:19.508687       9 log.go:172] (0xc002c46160) (0xc000b66460) Stream added, broadcasting: 5
I0125 11:31:19.510015       9 log.go:172] (0xc002c46160) Reply frame received for 5
I0125 11:31:19.597465       9 log.go:172] (0xc002c46160) Data frame received for 3
I0125 11:31:19.597690       9 log.go:172] (0xc002a1a780) (3) Data frame handling
I0125 11:31:19.597773       9 log.go:172] (0xc002a1a780) (3) Data frame sent
I0125 11:31:19.726083       9 log.go:172] (0xc002c46160) Data frame received for 1
I0125 11:31:19.726478       9 log.go:172] (0xc002c46160) (0xc000b66460) Stream removed, broadcasting: 5
I0125 11:31:19.726635       9 log.go:172] (0xc002346960) (1) Data frame handling
I0125 11:31:19.726684       9 log.go:172] (0xc002346960) (1) Data frame sent
I0125 11:31:19.726710       9 log.go:172] (0xc002c46160) (0xc002a1a780) Stream removed, broadcasting: 3
I0125 11:31:19.726781       9 log.go:172] (0xc002c46160) (0xc002346960) Stream removed, broadcasting: 1
I0125 11:31:19.726839       9 log.go:172] (0xc002c46160) Go away received
I0125 11:31:19.727332       9 log.go:172] (0xc002c46160) (0xc002346960) Stream removed, broadcasting: 1
I0125 11:31:19.727357       9 log.go:172] (0xc002c46160) (0xc002a1a780) Stream removed, broadcasting: 3
I0125 11:31:19.727377       9 log.go:172] (0xc002c46160) (0xc000b66460) Stream removed, broadcasting: 5
Jan 25 11:31:19.727: INFO: Waiting for responses: map[]
Jan 25 11:31:19.736: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-7967 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 11:31:19.736: INFO: >>> kubeConfig: /root/.kube/config
I0125 11:31:19.786162       9 log.go:172] (0xc0016346e0) (0xc002346fa0) Create stream
I0125 11:31:19.786443       9 log.go:172] (0xc0016346e0) (0xc002346fa0) Stream added, broadcasting: 1
I0125 11:31:19.801867       9 log.go:172] (0xc0016346e0) Reply frame received for 1
I0125 11:31:19.802213       9 log.go:172] (0xc0016346e0) (0xc000b666e0) Create stream
I0125 11:31:19.802259       9 log.go:172] (0xc0016346e0) (0xc000b666e0) Stream added, broadcasting: 3
I0125 11:31:19.804615       9 log.go:172] (0xc0016346e0) Reply frame received for 3
I0125 11:31:19.804709       9 log.go:172] (0xc0016346e0) (0xc001868780) Create stream
I0125 11:31:19.804757       9 log.go:172] (0xc0016346e0) (0xc001868780) Stream added, broadcasting: 5
I0125 11:31:19.807127       9 log.go:172] (0xc0016346e0) Reply frame received for 5
I0125 11:31:19.961346       9 log.go:172] (0xc0016346e0) Data frame received for 3
I0125 11:31:19.961498       9 log.go:172] (0xc000b666e0) (3) Data frame handling
I0125 11:31:19.961534       9 log.go:172] (0xc000b666e0) (3) Data frame sent
I0125 11:31:20.040987       9 log.go:172] (0xc0016346e0) (0xc000b666e0) Stream removed, broadcasting: 3
I0125 11:31:20.041494       9 log.go:172] (0xc0016346e0) (0xc001868780) Stream removed, broadcasting: 5
I0125 11:31:20.041785       9 log.go:172] (0xc0016346e0) Data frame received for 1
I0125 11:31:20.041885       9 log.go:172] (0xc002346fa0) (1) Data frame handling
I0125 11:31:20.041910       9 log.go:172] (0xc002346fa0) (1) Data frame sent
I0125 11:31:20.041936       9 log.go:172] (0xc0016346e0) (0xc002346fa0) Stream removed, broadcasting: 1
I0125 11:31:20.041974       9 log.go:172] (0xc0016346e0) Go away received
I0125 11:31:20.042470       9 log.go:172] (0xc0016346e0) (0xc002346fa0) Stream removed, broadcasting: 1
I0125 11:31:20.042499       9 log.go:172] (0xc0016346e0) (0xc000b666e0) Stream removed, broadcasting: 3
I0125 11:31:20.042513       9 log.go:172] (0xc0016346e0) (0xc001868780) Stream removed, broadcasting: 5
Jan 25 11:31:20.042: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:31:20.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7967" for this suite.

• [SLOW TEST:38.945 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":279,"completed":249,"skipped":4139,"failed":0}
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:31:20.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-107f4e1f-f1f3-427b-bb91-999f384f482a in namespace container-probe-6882
Jan 25 11:31:37.041: INFO: Started pod liveness-107f4e1f-f1f3-427b-bb91-999f384f482a in namespace container-probe-6882
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 11:31:37.047: INFO: Initial restart count of pod liveness-107f4e1f-f1f3-427b-bb91-999f384f482a is 0
Jan 25 11:32:03.167: INFO: Restart count of pod container-probe-6882/liveness-107f4e1f-f1f3-427b-bb91-999f384f482a is now 1 (26.119875676s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:32:03.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6882" for this suite.

• [SLOW TEST:43.266 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":279,"completed":250,"skipped":4139,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:32:03.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 11:32:04.184: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 11:32:06.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548725, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:32:08.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548725, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:32:10.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548725, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:32:12.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548725, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:32:14.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548725, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:32:16.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548725, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548724, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 11:32:19.405: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:32:19.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1724-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:32:20.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9272" for this suite.
STEP: Destroying namespace "webhook-9272-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:17.847 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":279,"completed":251,"skipped":4158,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:32:21.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:32:38.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8558" for this suite.

• [SLOW TEST:17.067 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":279,"completed":252,"skipped":4217,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:32:38.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 11:32:39.403: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 11:32:41.425: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:32:43.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:32:45.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548759, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 11:32:48.506: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:32:48.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8686" for this suite.
STEP: Destroying namespace "webhook-8686-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.432 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":279,"completed":253,"skipped":4217,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:32:48.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:32:49.108: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-be1b23ee-3f37-4592-a193-09d3c706063f" in namespace "security-context-test-6763" to be "success or failure"
Jan 25 11:32:49.115: INFO: Pod "busybox-privileged-false-be1b23ee-3f37-4592-a193-09d3c706063f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113552ms
Jan 25 11:32:51.151: INFO: Pod "busybox-privileged-false-be1b23ee-3f37-4592-a193-09d3c706063f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042710274s
Jan 25 11:32:53.196: INFO: Pod "busybox-privileged-false-be1b23ee-3f37-4592-a193-09d3c706063f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087521748s
Jan 25 11:32:55.261: INFO: Pod "busybox-privileged-false-be1b23ee-3f37-4592-a193-09d3c706063f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152778059s
Jan 25 11:32:57.275: INFO: Pod "busybox-privileged-false-be1b23ee-3f37-4592-a193-09d3c706063f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.166656436s
Jan 25 11:32:59.285: INFO: Pod "busybox-privileged-false-be1b23ee-3f37-4592-a193-09d3c706063f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.17601624s
Jan 25 11:33:01.294: INFO: Pod "busybox-privileged-false-be1b23ee-3f37-4592-a193-09d3c706063f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.185790976s
Jan 25 11:33:01.295: INFO: Pod "busybox-privileged-false-be1b23ee-3f37-4592-a193-09d3c706063f" satisfied condition "success or failure"
Jan 25 11:33:01.341: INFO: Got logs for pod "busybox-privileged-false-be1b23ee-3f37-4592-a193-09d3c706063f": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:33:01.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6763" for this suite.

• [SLOW TEST:12.697 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":254,"skipped":4230,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:33:01.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:33:01.675: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 25 11:33:07.147: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 11:33:09.479: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 25 11:33:11.487: INFO: Creating deployment "test-rollover-deployment"
Jan 25 11:33:11.542: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 25 11:33:13.562: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 25 11:33:13.570: INFO: Ensure that both replica sets have 1 created replica
Jan 25 11:33:13.578: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 25 11:33:13.595: INFO: Updating deployment test-rollover-deployment
Jan 25 11:33:13.595: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 25 11:33:15.680: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 25 11:33:15.690: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 25 11:33:15.699: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 11:33:15.699: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548794, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:33:17.745: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 11:33:17.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548794, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:33:19.716: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 11:33:19.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548794, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:33:21.715: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 11:33:21.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548794, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:33:23.725: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 11:33:23.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548803, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:33:25.734: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 11:33:25.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548803, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:33:27.754: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 11:33:27.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548803, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:33:29.712: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 11:33:29.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548803, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:33:31.714: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 11:33:31.715: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548803, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715548791, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:33:33.791: INFO: 
Jan 25 11:33:33.791: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 25 11:33:33.828: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-992 /apis/apps/v1/namespaces/deployment-992/deployments/test-rollover-deployment df0f8dc1-6421-4add-b4ef-5cf68e747eca 4239931 2 2020-01-25 11:33:11 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002708738  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-25 11:33:11 +0000 UTC,LastTransitionTime:2020-01-25 11:33:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-25 11:33:33 +0000 UTC,LastTransitionTime:2020-01-25 11:33:11 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 25 11:33:33.839: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-992 /apis/apps/v1/namespaces/deployment-992/replicasets/test-rollover-deployment-574d6dfbff 437d2ec6-0d00-4619-8c12-0f799ea6bcd7 4239921 2 2020-01-25 11:33:13 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment df0f8dc1-6421-4add-b4ef-5cf68e747eca 0xc000d59987 0xc000d59988}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000d59a48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 25 11:33:33.839: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 25 11:33:33.839: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-992 /apis/apps/v1/namespaces/deployment-992/replicasets/test-rollover-controller b8f11eff-8baf-44ca-ac1c-37af8de6c6ec 4239930 2 2020-01-25 11:33:01 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment df0f8dc1-6421-4add-b4ef-5cf68e747eca 0xc000d597df 0xc000d59810}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000d598e8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 11:33:33.840: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-992 /apis/apps/v1/namespaces/deployment-992/replicasets/test-rollover-deployment-f6c94f66c fdaecd40-93b6-4330-9abe-e4d96289745d 4239866 2 2020-01-25 11:33:11 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment df0f8dc1-6421-4add-b4ef-5cf68e747eca 0xc000d59b30 0xc000d59b31}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000d59be8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 11:33:33.850: INFO: Pod "test-rollover-deployment-574d6dfbff-b7xbq" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-b7xbq test-rollover-deployment-574d6dfbff- deployment-992 /api/v1/namespaces/deployment-992/pods/test-rollover-deployment-574d6dfbff-b7xbq ec704079-5150-427e-9942-0212666cee31 4239895 0 2020-01-25 11:33:13 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 437d2ec6-0d00-4619-8c12-0f799ea6bcd7 0xc0027368a7 0xc0027368a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zgp9q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zgp9q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zgp9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 11:33:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 11:33:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 11:33:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 11:33:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-25 11:33:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 11:33:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://e188c07ff5af18d76f8d41ae521ebba72f34db5c9ad14e0726496c9af3c8c64c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:33:33.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-992" for this suite.

• [SLOW TEST:32.541 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":279,"completed":255,"skipped":4246,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:33:33.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-70fbba42-9dbf-42b4-a375-2226af38c966
STEP: Creating a pod to test consume secrets
Jan 25 11:33:34.074: INFO: Waiting up to 5m0s for pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24" in namespace "secrets-5887" to be "success or failure"
Jan 25 11:33:34.238: INFO: Pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24": Phase="Pending", Reason="", readiness=false. Elapsed: 163.549576ms
Jan 25 11:33:36.246: INFO: Pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171832569s
Jan 25 11:33:38.261: INFO: Pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186969002s
Jan 25 11:33:40.303: INFO: Pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228803822s
Jan 25 11:33:42.312: INFO: Pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24": Phase="Pending", Reason="", readiness=false. Elapsed: 8.237955307s
Jan 25 11:33:44.320: INFO: Pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24": Phase="Pending", Reason="", readiness=false. Elapsed: 10.2463091s
Jan 25 11:33:46.330: INFO: Pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24": Phase="Pending", Reason="", readiness=false. Elapsed: 12.255874321s
Jan 25 11:33:48.342: INFO: Pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24": Phase="Pending", Reason="", readiness=false. Elapsed: 14.267615219s
Jan 25 11:33:50.356: INFO: Pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.281663009s
STEP: Saw pod success
Jan 25 11:33:50.356: INFO: Pod "pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24" satisfied condition "success or failure"
Jan 25 11:33:50.361: INFO: Trying to get logs from node jerma-node pod pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24 container secret-volume-test: 
STEP: delete the pod
Jan 25 11:33:50.395: INFO: Waiting for pod pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24 to disappear
Jan 25 11:33:50.409: INFO: Pod pod-secrets-1f510f4c-70d0-44dc-a18a-3d0e71f9be24 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:33:50.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5887" for this suite.

• [SLOW TEST:16.521 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":279,"completed":256,"skipped":4259,"failed":0}
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:33:50.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:33:50.597: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 24.523972ms)
Jan 25 11:33:50.602: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 5.359507ms)
Jan 25 11:33:50.607: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.138572ms)
Jan 25 11:33:50.611: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.427769ms)
Jan 25 11:33:50.614: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.304223ms)
Jan 25 11:33:50.618: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.222847ms)
Jan 25 11:33:50.621: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.620015ms)
Jan 25 11:33:50.626: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.209947ms)
Jan 25 11:33:50.631: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 5.12177ms)
Jan 25 11:33:50.634: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.394996ms)
Jan 25 11:33:50.638: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.265521ms)
Jan 25 11:33:50.641: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.628681ms)
Jan 25 11:33:50.644: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.935271ms)
Jan 25 11:33:50.647: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 2.569822ms)
Jan 25 11:33:50.651: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.459045ms)
Jan 25 11:33:50.655: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.295914ms)
Jan 25 11:33:50.659: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 4.481178ms)
Jan 25 11:33:50.663: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.556427ms)
Jan 25 11:33:50.668: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 5.052542ms)
Jan 25 11:33:50.671: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: 
alternatives.log
apt/
... (200; 3.039962ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:33:50.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8874" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":279,"completed":257,"skipped":4265,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:33:50.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name projected-secret-test-552c9b5d-3cc4-4c75-b4ad-2fd22fc41ef5
STEP: Creating a pod to test consume secrets
Jan 25 11:33:50.874: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c" in namespace "projected-7413" to be "success or failure"
Jan 25 11:33:50.919: INFO: Pod "pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.388487ms
Jan 25 11:33:52.926: INFO: Pod "pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052251964s
Jan 25 11:33:54.937: INFO: Pod "pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063028619s
Jan 25 11:33:56.950: INFO: Pod "pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076270443s
Jan 25 11:33:58.962: INFO: Pod "pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088321912s
Jan 25 11:34:00.970: INFO: Pod "pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09595396s
STEP: Saw pod success
Jan 25 11:34:00.970: INFO: Pod "pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c" satisfied condition "success or failure"
Jan 25 11:34:00.973: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c container secret-volume-test: 
STEP: delete the pod
Jan 25 11:34:01.013: INFO: Waiting for pod pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c to disappear
Jan 25 11:34:01.103: INFO: Pod pod-projected-secrets-72b1abf1-0441-408b-8c75-b2897c67b74c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:34:01.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7413" for this suite.

• [SLOW TEST:10.431 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":279,"completed":258,"skipped":4278,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:34:01.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:34:06.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5071" for this suite.

• [SLOW TEST:5.262 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":279,"completed":259,"skipped":4280,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:34:06.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-4721fb3b-0a11-4adb-b5c2-aa1120410ebc
STEP: Creating a pod to test consume secrets
Jan 25 11:34:06.507: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598" in namespace "projected-5372" to be "success or failure"
Jan 25 11:34:06.588: INFO: Pod "pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598": Phase="Pending", Reason="", readiness=false. Elapsed: 81.132983ms
Jan 25 11:34:08.602: INFO: Pod "pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095379818s
Jan 25 11:34:10.619: INFO: Pod "pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1116269s
Jan 25 11:34:12.630: INFO: Pod "pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123337465s
Jan 25 11:34:14.642: INFO: Pod "pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135338315s
Jan 25 11:34:16.658: INFO: Pod "pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150975878s
STEP: Saw pod success
Jan 25 11:34:16.659: INFO: Pod "pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598" satisfied condition "success or failure"
Jan 25 11:34:16.664: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 11:34:16.847: INFO: Waiting for pod pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598 to disappear
Jan 25 11:34:16.906: INFO: Pod pod-projected-secrets-cf668839-0cca-454f-94d4-a35480e42598 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:34:16.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5372" for this suite.

• [SLOW TEST:10.542 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":279,"completed":260,"skipped":4283,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:34:16.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 25 11:34:29.876: INFO: Successfully updated pod "annotationupdate8dc02d87-b0ab-41fa-8ad0-7938650b3644"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:34:31.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8470" for this suite.

• [SLOW TEST:15.067 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":279,"completed":261,"skipped":4293,"failed":0}
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:34:31.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-projected-all-test-volume-f59fdfde-85f8-4866-9626-0c1d1a86024f
STEP: Creating secret with name secret-projected-all-test-volume-e9058b81-853c-42ef-a431-4796eca3a622
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 25 11:34:32.198: INFO: Waiting up to 5m0s for pod "projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0" in namespace "projected-3870" to be "success or failure"
Jan 25 11:34:32.202: INFO: Pod "projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.628279ms
Jan 25 11:34:34.210: INFO: Pod "projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012242894s
Jan 25 11:34:36.220: INFO: Pod "projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022408701s
Jan 25 11:34:38.230: INFO: Pod "projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03248854s
Jan 25 11:34:40.244: INFO: Pod "projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046067405s
Jan 25 11:34:42.250: INFO: Pod "projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.051829388s
Jan 25 11:34:44.261: INFO: Pod "projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.063780171s
Jan 25 11:34:46.270: INFO: Pod "projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.072581129s
STEP: Saw pod success
Jan 25 11:34:46.271: INFO: Pod "projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0" satisfied condition "success or failure"
Jan 25 11:34:46.276: INFO: Trying to get logs from node jerma-node pod projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0 container projected-all-volume-test: 
STEP: delete the pod
Jan 25 11:34:46.946: INFO: Waiting for pod projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0 to disappear
Jan 25 11:34:46.961: INFO: Pod projected-volume-9e3b3a9e-49cd-4ff4-a6a8-787155e092a0 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:34:46.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3870" for this suite.

• [SLOW TEST:15.010 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":279,"completed":262,"skipped":4293,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:34:47.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:35:03.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2202" for this suite.

• [SLOW TEST:16.595 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":279,"completed":263,"skipped":4300,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:35:03.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-042d6ee3-8b50-4cb3-bedf-239dc5676336
STEP: Creating a pod to test consume configMaps
Jan 25 11:35:03.799: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-afb0cce7-2bd5-4057-bb75-da03b3a93b11" in namespace "projected-8593" to be "success or failure"
Jan 25 11:35:03.808: INFO: Pod "pod-projected-configmaps-afb0cce7-2bd5-4057-bb75-da03b3a93b11": Phase="Pending", Reason="", readiness=false. Elapsed: 8.45524ms
Jan 25 11:35:05.857: INFO: Pod "pod-projected-configmaps-afb0cce7-2bd5-4057-bb75-da03b3a93b11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057224684s
Jan 25 11:35:07.942: INFO: Pod "pod-projected-configmaps-afb0cce7-2bd5-4057-bb75-da03b3a93b11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143036465s
Jan 25 11:35:09.961: INFO: Pod "pod-projected-configmaps-afb0cce7-2bd5-4057-bb75-da03b3a93b11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161639803s
Jan 25 11:35:11.972: INFO: Pod "pod-projected-configmaps-afb0cce7-2bd5-4057-bb75-da03b3a93b11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.172334277s
STEP: Saw pod success
Jan 25 11:35:11.972: INFO: Pod "pod-projected-configmaps-afb0cce7-2bd5-4057-bb75-da03b3a93b11" satisfied condition "success or failure"
Jan 25 11:35:11.977: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-afb0cce7-2bd5-4057-bb75-da03b3a93b11 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 11:35:12.029: INFO: Waiting for pod pod-projected-configmaps-afb0cce7-2bd5-4057-bb75-da03b3a93b11 to disappear
Jan 25 11:35:12.039: INFO: Pod pod-projected-configmaps-afb0cce7-2bd5-4057-bb75-da03b3a93b11 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:35:12.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8593" for this suite.

• [SLOW TEST:8.459 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":279,"completed":264,"skipped":4305,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:35:12.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 11:35:12.384: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:35:13.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6852" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":279,"completed":265,"skipped":4307,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:35:13.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-a816b0a5-35b8-4d90-8a85-f4056dd4447b
STEP: Creating secret with name s-test-opt-upd-43756291-a79c-4a5c-a085-db85feee9835
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a816b0a5-35b8-4d90-8a85-f4056dd4447b
STEP: Updating secret s-test-opt-upd-43756291-a79c-4a5c-a085-db85feee9835
STEP: Creating secret with name s-test-opt-create-4ae408f8-ab3f-4738-b2b7-2e639b51d9c4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:36:59.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7607" for this suite.

• [SLOW TEST:105.680 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":279,"completed":266,"skipped":4344,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:36:59.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 11:37:00.103: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 11:37:02.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:37:04.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:37:06.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:37:08.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549020, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 11:37:11.241: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:37:11.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1089" for this suite.
STEP: Destroying namespace "webhook-1089-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.342 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":279,"completed":267,"skipped":4351,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:37:11.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jan 25 11:37:12.005: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 25 11:37:17.019: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:37:17.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6754" for this suite.

• [SLOW TEST:5.385 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":279,"completed":268,"skipped":4355,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:37:17.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:37:28.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5093" for this suite.

• [SLOW TEST:11.561 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":279,"completed":269,"skipped":4383,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:37:28.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 25 11:37:29.022: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-a a6250e20-db6f-48b2-a153-1cb73ae99dfd 4241009 0 2020-01-25 11:37:29 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 25 11:37:29.022: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-a a6250e20-db6f-48b2-a153-1cb73ae99dfd 4241009 0 2020-01-25 11:37:29 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 25 11:37:39.032: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-a a6250e20-db6f-48b2-a153-1cb73ae99dfd 4241038 0 2020-01-25 11:37:29 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 25 11:37:39.033: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-a a6250e20-db6f-48b2-a153-1cb73ae99dfd 4241038 0 2020-01-25 11:37:29 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 25 11:37:49.126: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-a a6250e20-db6f-48b2-a153-1cb73ae99dfd 4241062 0 2020-01-25 11:37:29 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 25 11:37:49.127: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-a a6250e20-db6f-48b2-a153-1cb73ae99dfd 4241062 0 2020-01-25 11:37:29 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 25 11:37:59.140: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-a a6250e20-db6f-48b2-a153-1cb73ae99dfd 4241082 0 2020-01-25 11:37:29 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 25 11:37:59.141: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-a a6250e20-db6f-48b2-a153-1cb73ae99dfd 4241082 0 2020-01-25 11:37:29 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 25 11:38:09.157: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-b 29661712-f6a7-4ddd-8846-54bc399f520d 4241108 0 2020-01-25 11:38:09 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 25 11:38:09.157: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-b 29661712-f6a7-4ddd-8846-54bc399f520d 4241108 0 2020-01-25 11:38:09 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 25 11:38:19.168: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-b 29661712-f6a7-4ddd-8846-54bc399f520d 4241132 0 2020-01-25 11:38:09 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 25 11:38:19.168: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7076 /api/v1/namespaces/watch-7076/configmaps/e2e-watch-test-configmap-b 29661712-f6a7-4ddd-8846-54bc399f520d 4241132 0 2020-01-25 11:38:09 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:38:29.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7076" for this suite.

• [SLOW TEST:60.394 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":279,"completed":270,"skipped":4390,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:38:29.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 11:38:29.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2523'
Jan 25 11:38:32.005: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 11:38:32.006: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740
Jan 25 11:38:34.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2523'
Jan 25 11:38:34.384: INFO: stderr: ""
Jan 25 11:38:34.385: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:38:34.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2523" for this suite.

• [SLOW TEST:5.211 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1731
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":279,"completed":271,"skipped":4438,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:38:34.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 11:38:34.586: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a" in namespace "projected-3565" to be "success or failure"
Jan 25 11:38:34.673: INFO: Pod "downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a": Phase="Pending", Reason="", readiness=false. Elapsed: 86.826627ms
Jan 25 11:38:36.678: INFO: Pod "downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091752164s
Jan 25 11:38:38.689: INFO: Pod "downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102950551s
Jan 25 11:38:40.707: INFO: Pod "downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120381624s
Jan 25 11:38:42.712: INFO: Pod "downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125625595s
Jan 25 11:38:44.721: INFO: Pod "downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.134978594s
Jan 25 11:38:46.729: INFO: Pod "downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.142752673s
STEP: Saw pod success
Jan 25 11:38:46.729: INFO: Pod "downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a" satisfied condition "success or failure"
Jan 25 11:38:46.732: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a container client-container: 
STEP: delete the pod
Jan 25 11:38:46.764: INFO: Waiting for pod downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a to disappear
Jan 25 11:38:46.768: INFO: Pod downwardapi-volume-f7f7d8a6-e511-48cc-8057-4297ef1b321a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:38:46.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3565" for this suite.

• [SLOW TEST:12.395 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":279,"completed":272,"skipped":4457,"failed":0}
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:38:46.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:38:59.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4018" for this suite.

• [SLOW TEST:13.180 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":279,"completed":273,"skipped":4458,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:38:59.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-18c3c512-cb80-482f-852c-ec3c945cb355 in namespace container-probe-1047
Jan 25 11:39:12.117: INFO: Started pod busybox-18c3c512-cb80-482f-852c-ec3c945cb355 in namespace container-probe-1047
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 11:39:12.121: INFO: Initial restart count of pod busybox-18c3c512-cb80-482f-852c-ec3c945cb355 is 0
Jan 25 11:40:02.617: INFO: Restart count of pod container-probe-1047/busybox-18c3c512-cb80-482f-852c-ec3c945cb355 is now 1 (50.496008191s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:40:02.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1047" for this suite.

• [SLOW TEST:62.848 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":279,"completed":274,"skipped":4470,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:40:02.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 11:40:03.954: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 11:40:05.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549203, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:40:08.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549203, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:40:09.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549203, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 11:40:11.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549204, loc:(*time.Location)(0x7e51ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715549203, loc:(*time.Location)(0x7e51ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 11:40:15.144: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:40:15.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3531" for this suite.
STEP: Destroying namespace "webhook-3531-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.889 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":279,"completed":275,"skipped":4492,"failed":0}
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:40:15.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3762
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-3762
I0125 11:40:15.921990       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3762, replica count: 2
I0125 11:40:18.974241       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:40:21.975370       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:40:24.975991       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:40:27.976694       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 11:40:27.976: INFO: Creating new exec pod
Jan 25 11:40:39.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3762 execpodc6sc9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 25 11:40:39.303: INFO: stderr: "I0125 11:40:39.131864    4932 log.go:172] (0xc0005c2630) (0xc0005d68c0) Create stream\nI0125 11:40:39.131995    4932 log.go:172] (0xc0005c2630) (0xc0005d68c0) Stream added, broadcasting: 1\nI0125 11:40:39.134730    4932 log.go:172] (0xc0005c2630) Reply frame received for 1\nI0125 11:40:39.134768    4932 log.go:172] (0xc0005c2630) (0xc000729540) Create stream\nI0125 11:40:39.134778    4932 log.go:172] (0xc0005c2630) (0xc000729540) Stream added, broadcasting: 3\nI0125 11:40:39.135666    4932 log.go:172] (0xc0005c2630) Reply frame received for 3\nI0125 11:40:39.135692    4932 log.go:172] (0xc0005c2630) (0xc0007295e0) Create stream\nI0125 11:40:39.135705    4932 log.go:172] (0xc0005c2630) (0xc0007295e0) Stream added, broadcasting: 5\nI0125 11:40:39.136585    4932 log.go:172] (0xc0005c2630) Reply frame received for 5\nI0125 11:40:39.228014    4932 log.go:172] (0xc0005c2630) Data frame received for 5\nI0125 11:40:39.228103    4932 log.go:172] (0xc0007295e0) (5) Data frame handling\nI0125 11:40:39.228134    4932 log.go:172] (0xc0007295e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0125 11:40:39.234316    4932 log.go:172] (0xc0005c2630) Data frame received for 5\nI0125 11:40:39.234333    4932 log.go:172] (0xc0007295e0) (5) Data frame handling\nI0125 11:40:39.234347    4932 log.go:172] (0xc0007295e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0125 11:40:39.299465    4932 log.go:172] (0xc0005c2630) (0xc000729540) Stream removed, broadcasting: 3\nI0125 11:40:39.299536    4932 log.go:172] (0xc0005c2630) Data frame received for 1\nI0125 11:40:39.299546    4932 log.go:172] (0xc0005d68c0) (1) Data frame handling\nI0125 11:40:39.299557    4932 log.go:172] (0xc0005d68c0) (1) Data frame sent\nI0125 11:40:39.299595    4932 log.go:172] (0xc0005c2630) (0xc0005d68c0) Stream removed, broadcasting: 1\nI0125 11:40:39.299865    4932 log.go:172] (0xc0005c2630) (0xc0007295e0) Stream removed, broadcasting: 5\nI0125 11:40:39.299887    4932 log.go:172] (0xc0005c2630) (0xc0005d68c0) Stream removed, broadcasting: 1\nI0125 11:40:39.299895    4932 log.go:172] (0xc0005c2630) (0xc000729540) Stream removed, broadcasting: 3\nI0125 11:40:39.299901    4932 log.go:172] (0xc0005c2630) (0xc0007295e0) Stream removed, broadcasting: 5\nI0125 11:40:39.300088    4932 log.go:172] (0xc0005c2630) Go away received\n"
Jan 25 11:40:39.304: INFO: stdout: ""
Jan 25 11:40:39.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3762 execpodc6sc9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.193.202 80'
Jan 25 11:40:39.671: INFO: stderr: "I0125 11:40:39.508004    4953 log.go:172] (0xc0009da630) (0xc0006bbae0) Create stream\nI0125 11:40:39.508149    4953 log.go:172] (0xc0009da630) (0xc0006bbae0) Stream added, broadcasting: 1\nI0125 11:40:39.511171    4953 log.go:172] (0xc0009da630) Reply frame received for 1\nI0125 11:40:39.511217    4953 log.go:172] (0xc0009da630) (0xc000a6e000) Create stream\nI0125 11:40:39.511227    4953 log.go:172] (0xc0009da630) (0xc000a6e000) Stream added, broadcasting: 3\nI0125 11:40:39.512202    4953 log.go:172] (0xc0009da630) Reply frame received for 3\nI0125 11:40:39.512297    4953 log.go:172] (0xc0009da630) (0xc000354000) Create stream\nI0125 11:40:39.512317    4953 log.go:172] (0xc0009da630) (0xc000354000) Stream added, broadcasting: 5\nI0125 11:40:39.514378    4953 log.go:172] (0xc0009da630) Reply frame received for 5\nI0125 11:40:39.589607    4953 log.go:172] (0xc0009da630) Data frame received for 5\nI0125 11:40:39.589677    4953 log.go:172] (0xc000354000) (5) Data frame handling\nI0125 11:40:39.589705    4953 log.go:172] (0xc000354000) (5) Data frame sent\nI0125 11:40:39.589711    4953 log.go:172] (0xc0009da630) Data frame received for 5\nI0125 11:40:39.589718    4953 log.go:172] (0xc000354000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.193.202 80\nConnection to 10.96.193.202 80 port [tcp/http] succeeded!\nI0125 11:40:39.589780    4953 log.go:172] (0xc000354000) (5) Data frame sent\nI0125 11:40:39.661312    4953 log.go:172] (0xc0009da630) Data frame received for 1\nI0125 11:40:39.661404    4953 log.go:172] (0xc0009da630) (0xc000a6e000) Stream removed, broadcasting: 3\nI0125 11:40:39.661451    4953 log.go:172] (0xc0006bbae0) (1) Data frame handling\nI0125 11:40:39.661472    4953 log.go:172] (0xc0006bbae0) (1) Data frame sent\nI0125 11:40:39.661533    4953 log.go:172] (0xc0009da630) (0xc000354000) Stream removed, broadcasting: 5\nI0125 11:40:39.661576    4953 log.go:172] (0xc0009da630) (0xc0006bbae0) Stream removed, broadcasting: 1\nI0125 11:40:39.661607    4953 log.go:172] (0xc0009da630) Go away received\nI0125 11:40:39.662424    4953 log.go:172] (0xc0009da630) (0xc0006bbae0) Stream removed, broadcasting: 1\nI0125 11:40:39.662445    4953 log.go:172] (0xc0009da630) (0xc000a6e000) Stream removed, broadcasting: 3\nI0125 11:40:39.662456    4953 log.go:172] (0xc0009da630) (0xc000354000) Stream removed, broadcasting: 5\n"
Jan 25 11:40:39.672: INFO: stdout: ""
Jan 25 11:40:39.672: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:40:39.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3762" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:24.060 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":279,"completed":276,"skipped":4492,"failed":0}
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:40:39.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-dcebc413-9225-4116-aee8-e6c8789444d9
STEP: Creating a pod to test consume secrets
Jan 25 11:40:39.965: INFO: Waiting up to 5m0s for pod "pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80" in namespace "secrets-8785" to be "success or failure"
Jan 25 11:40:39.985: INFO: Pod "pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80": Phase="Pending", Reason="", readiness=false. Elapsed: 19.607647ms
Jan 25 11:40:41.992: INFO: Pod "pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026723517s
Jan 25 11:40:44.000: INFO: Pod "pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035167122s
Jan 25 11:40:46.044: INFO: Pod "pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078808357s
Jan 25 11:40:48.775: INFO: Pod "pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80": Phase="Pending", Reason="", readiness=false. Elapsed: 8.810074454s
Jan 25 11:40:51.074: INFO: Pod "pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80": Phase="Pending", Reason="", readiness=false. Elapsed: 11.108806547s
Jan 25 11:40:53.084: INFO: Pod "pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.119110512s
STEP: Saw pod success
Jan 25 11:40:53.085: INFO: Pod "pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80" satisfied condition "success or failure"
Jan 25 11:40:53.087: INFO: Trying to get logs from node jerma-node pod pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80 container secret-volume-test: 
STEP: delete the pod
Jan 25 11:40:53.141: INFO: Waiting for pod pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80 to disappear
Jan 25 11:40:53.280: INFO: Pod pod-secrets-0773eaa8-9027-47cb-8f9d-f86fbf8c3a80 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:40:53.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8785" for this suite.

• [SLOW TEST:13.534 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":279,"completed":277,"skipped":4492,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:40:53.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-9665/configmap-test-502a656d-5f5d-45c8-91c2-f45634301c4a
STEP: Creating a pod to test consume configMaps
Jan 25 11:40:53.454: INFO: Waiting up to 5m0s for pod "pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7" in namespace "configmap-9665" to be "success or failure"
Jan 25 11:40:53.467: INFO: Pod "pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.349447ms
Jan 25 11:40:55.476: INFO: Pod "pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021580594s
Jan 25 11:40:57.530: INFO: Pod "pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07606409s
Jan 25 11:40:59.585: INFO: Pod "pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130538469s
Jan 25 11:41:01.592: INFO: Pod "pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137679103s
Jan 25 11:41:03.669: INFO: Pod "pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.214765162s
Jan 25 11:41:05.697: INFO: Pod "pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.242401451s
STEP: Saw pod success
Jan 25 11:41:05.697: INFO: Pod "pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7" satisfied condition "success or failure"
Jan 25 11:41:05.700: INFO: Trying to get logs from node jerma-node pod pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7 container env-test: 
STEP: delete the pod
Jan 25 11:41:05.759: INFO: Waiting for pod pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7 to disappear
Jan 25 11:41:05.777: INFO: Pod pod-configmaps-266a45a5-94d2-462a-9531-f26db938fda7 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:41:05.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9665" for this suite.

• [SLOW TEST:12.464 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":279,"completed":278,"skipped":4527,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 11:41:05.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4473
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-4473
I0125 11:41:06.358123       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4473, replica count: 2
I0125 11:41:09.410265       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:41:12.411132       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:41:15.411799       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 11:41:18.412451       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 11:41:18.412: INFO: Creating new exec pod
Jan 25 11:41:27.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4473 execpodc2nfb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 25 11:41:27.956: INFO: stderr: "I0125 11:41:27.739507    4973 log.go:172] (0xc000ac6bb0) (0xc000abe140) Create stream\nI0125 11:41:27.739688    4973 log.go:172] (0xc000ac6bb0) (0xc000abe140) Stream added, broadcasting: 1\nI0125 11:41:27.744075    4973 log.go:172] (0xc000ac6bb0) Reply frame received for 1\nI0125 11:41:27.744121    4973 log.go:172] (0xc000ac6bb0) (0xc000abe1e0) Create stream\nI0125 11:41:27.744134    4973 log.go:172] (0xc000ac6bb0) (0xc000abe1e0) Stream added, broadcasting: 3\nI0125 11:41:27.745733    4973 log.go:172] (0xc000ac6bb0) Reply frame received for 3\nI0125 11:41:27.745762    4973 log.go:172] (0xc000ac6bb0) (0xc000abe280) Create stream\nI0125 11:41:27.745771    4973 log.go:172] (0xc000ac6bb0) (0xc000abe280) Stream added, broadcasting: 5\nI0125 11:41:27.747766    4973 log.go:172] (0xc000ac6bb0) Reply frame received for 5\nI0125 11:41:27.838992    4973 log.go:172] (0xc000ac6bb0) Data frame received for 5\nI0125 11:41:27.839080    4973 log.go:172] (0xc000abe280) (5) Data frame handling\nI0125 11:41:27.839122    4973 log.go:172] (0xc000abe280) (5) Data frame sent\nI0125 11:41:27.839150    4973 log.go:172] (0xc000ac6bb0) Data frame received for 5\nI0125 11:41:27.839171    4973 log.go:172] (0xc000abe280) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nI0125 11:41:27.839252    4973 log.go:172] (0xc000abe280) (5) Data frame sent\nI0125 11:41:27.847477    4973 log.go:172] (0xc000ac6bb0) Data frame received for 5\nI0125 11:41:27.847522    4973 log.go:172] (0xc000abe280) (5) Data frame handling\nI0125 11:41:27.847542    4973 log.go:172] (0xc000abe280) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0125 11:41:27.944007    4973 log.go:172] (0xc000ac6bb0) Data frame received for 1\nI0125 11:41:27.944117    4973 log.go:172] (0xc000abe140) (1) Data frame handling\nI0125 11:41:27.944153    4973 log.go:172] (0xc000abe140) (1) Data frame sent\nI0125 11:41:27.944498    4973 log.go:172] (0xc000ac6bb0) (0xc000abe280) Stream removed, broadcasting: 5\nI0125 11:41:27.944557    4973 log.go:172] (0xc000ac6bb0) (0xc000abe140) Stream removed, broadcasting: 1\nI0125 11:41:27.944835    4973 log.go:172] (0xc000ac6bb0) (0xc000abe1e0) Stream removed, broadcasting: 3\nI0125 11:41:27.945069    4973 log.go:172] (0xc000ac6bb0) Go away received\nI0125 11:41:27.945608    4973 log.go:172] (0xc000ac6bb0) (0xc000abe140) Stream removed, broadcasting: 1\nI0125 11:41:27.945677    4973 log.go:172] (0xc000ac6bb0) (0xc000abe1e0) Stream removed, broadcasting: 3\nI0125 11:41:27.945695    4973 log.go:172] (0xc000ac6bb0) (0xc000abe280) Stream removed, broadcasting: 5\n"
Jan 25 11:41:27.956: INFO: stdout: ""
Jan 25 11:41:27.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4473 execpodc2nfb -- /bin/sh -x -c nc -zv -t -w 2 10.96.113.243 80'
Jan 25 11:41:28.348: INFO: stderr: "I0125 11:41:28.166133    4991 log.go:172] (0xc0009e8580) (0xc0008e6280) Create stream\nI0125 11:41:28.166322    4991 log.go:172] (0xc0009e8580) (0xc0008e6280) Stream added, broadcasting: 1\nI0125 11:41:28.170647    4991 log.go:172] (0xc0009e8580) Reply frame received for 1\nI0125 11:41:28.170686    4991 log.go:172] (0xc0009e8580) (0xc0008e6320) Create stream\nI0125 11:41:28.170696    4991 log.go:172] (0xc0009e8580) (0xc0008e6320) Stream added, broadcasting: 3\nI0125 11:41:28.171843    4991 log.go:172] (0xc0009e8580) Reply frame received for 3\nI0125 11:41:28.171881    4991 log.go:172] (0xc0009e8580) (0xc00062c960) Create stream\nI0125 11:41:28.171895    4991 log.go:172] (0xc0009e8580) (0xc00062c960) Stream added, broadcasting: 5\nI0125 11:41:28.172834    4991 log.go:172] (0xc0009e8580) Reply frame received for 5\nI0125 11:41:28.247243    4991 log.go:172] (0xc0009e8580) Data frame received for 5\nI0125 11:41:28.247378    4991 log.go:172] (0xc00062c960) (5) Data frame handling\nI0125 11:41:28.247407    4991 log.go:172] (0xc00062c960) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.113.243 80\nI0125 11:41:28.248694    4991 log.go:172] (0xc0009e8580) Data frame received for 5\nI0125 11:41:28.248712    4991 log.go:172] (0xc00062c960) (5) Data frame handling\nI0125 11:41:28.248725    4991 log.go:172] (0xc00062c960) (5) Data frame sent\nConnection to 10.96.113.243 80 port [tcp/http] succeeded!\nI0125 11:41:28.336534    4991 log.go:172] (0xc0009e8580) (0xc0008e6320) Stream removed, broadcasting: 3\nI0125 11:41:28.336605    4991 log.go:172] (0xc0009e8580) Data frame received for 1\nI0125 11:41:28.336618    4991 log.go:172] (0xc0008e6280) (1) Data frame handling\nI0125 11:41:28.336629    4991 log.go:172] (0xc0008e6280) (1) Data frame sent\nI0125 11:41:28.336693    4991 log.go:172] (0xc0009e8580) (0xc0008e6280) Stream removed, broadcasting: 1\nI0125 11:41:28.337203    4991 log.go:172] (0xc0009e8580) (0xc00062c960) Stream removed, broadcasting: 5\nI0125 11:41:28.337238    4991 log.go:172] (0xc0009e8580) (0xc0008e6280) Stream removed, broadcasting: 1\nI0125 11:41:28.337246    4991 log.go:172] (0xc0009e8580) (0xc0008e6320) Stream removed, broadcasting: 3\nI0125 11:41:28.337253    4991 log.go:172] (0xc0009e8580) (0xc00062c960) Stream removed, broadcasting: 5\nI0125 11:41:28.337469    4991 log.go:172] (0xc0009e8580) Go away received\n"
Jan 25 11:41:28.349: INFO: stdout: ""
Jan 25 11:41:28.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4473 execpodc2nfb -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 31578'
Jan 25 11:41:28.779: INFO: stderr: "I0125 11:41:28.536062    5012 log.go:172] (0xc0000f49a0) (0xc00089c000) Create stream\nI0125 11:41:28.536498    5012 log.go:172] (0xc0000f49a0) (0xc00089c000) Stream added, broadcasting: 1\nI0125 11:41:28.541606    5012 log.go:172] (0xc0000f49a0) Reply frame received for 1\nI0125 11:41:28.541731    5012 log.go:172] (0xc0000f49a0) (0xc0006efc20) Create stream\nI0125 11:41:28.541761    5012 log.go:172] (0xc0000f49a0) (0xc0006efc20) Stream added, broadcasting: 3\nI0125 11:41:28.544043    5012 log.go:172] (0xc0000f49a0) Reply frame received for 3\nI0125 11:41:28.544280    5012 log.go:172] (0xc0000f49a0) (0xc0003ce000) Create stream\nI0125 11:41:28.544338    5012 log.go:172] (0xc0000f49a0) (0xc0003ce000) Stream added, broadcasting: 5\nI0125 11:41:28.547763    5012 log.go:172] (0xc0000f49a0) Reply frame received for 5\nI0125 11:41:28.640711    5012 log.go:172] (0xc0000f49a0) Data frame received for 5\nI0125 11:41:28.640954    5012 log.go:172] (0xc0003ce000) (5) Data frame handling\nI0125 11:41:28.641023    5012 log.go:172] (0xc0003ce000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 31578\nI0125 11:41:28.645333    5012 log.go:172] (0xc0000f49a0) Data frame received for 5\nI0125 11:41:28.645535    5012 log.go:172] (0xc0003ce000) (5) Data frame handling\nI0125 11:41:28.645565    5012 log.go:172] (0xc0003ce000) (5) Data frame sent\nConnection to 10.96.2.250 31578 port [tcp/31578] succeeded!\nI0125 11:41:28.769446    5012 log.go:172] (0xc0000f49a0) Data frame received for 1\nI0125 11:41:28.769662    5012 log.go:172] (0xc0000f49a0) (0xc0006efc20) Stream removed, broadcasting: 3\nI0125 11:41:28.769792    5012 log.go:172] (0xc00089c000) (1) Data frame handling\nI0125 11:41:28.769843    5012 log.go:172] (0xc00089c000) (1) Data frame sent\nI0125 11:41:28.769872    5012 log.go:172] (0xc0000f49a0) (0xc0003ce000) Stream removed, broadcasting: 5\nI0125 11:41:28.769910    5012 log.go:172] (0xc0000f49a0) (0xc00089c000) Stream removed, broadcasting: 1\nI0125 11:41:28.769945    5012 log.go:172] (0xc0000f49a0) Go away received\nI0125 11:41:28.770679    5012 log.go:172] (0xc0000f49a0) (0xc00089c000) Stream removed, broadcasting: 1\nI0125 11:41:28.770726    5012 log.go:172] (0xc0000f49a0) (0xc0006efc20) Stream removed, broadcasting: 3\nI0125 11:41:28.770738    5012 log.go:172] (0xc0000f49a0) (0xc0003ce000) Stream removed, broadcasting: 5\n"
Jan 25 11:41:28.779: INFO: stdout: ""
Jan 25 11:41:28.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4473 execpodc2nfb -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 31578'
Jan 25 11:41:29.210: INFO: stderr: "I0125 11:41:29.025303    5033 log.go:172] (0xc00054af20) (0xc000489e00) Create stream\nI0125 11:41:29.025419    5033 log.go:172] (0xc00054af20) (0xc000489e00) Stream added, broadcasting: 1\nI0125 11:41:29.030392    5033 log.go:172] (0xc00054af20) Reply frame received for 1\nI0125 11:41:29.030436    5033 log.go:172] (0xc00054af20) (0xc00020c000) Create stream\nI0125 11:41:29.030452    5033 log.go:172] (0xc00054af20) (0xc00020c000) Stream added, broadcasting: 3\nI0125 11:41:29.032016    5033 log.go:172] (0xc00054af20) Reply frame received for 3\nI0125 11:41:29.032105    5033 log.go:172] (0xc00054af20) (0xc000489ea0) Create stream\nI0125 11:41:29.032122    5033 log.go:172] (0xc00054af20) (0xc000489ea0) Stream added, broadcasting: 5\nI0125 11:41:29.033472    5033 log.go:172] (0xc00054af20) Reply frame received for 5\nI0125 11:41:29.101310    5033 log.go:172] (0xc00054af20) Data frame received for 5\nI0125 11:41:29.101453    5033 log.go:172] (0xc000489ea0) (5) Data frame handling\nI0125 11:41:29.101494    5033 log.go:172] (0xc000489ea0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 31578\nI0125 11:41:29.104193    5033 log.go:172] (0xc00054af20) Data frame received for 5\nI0125 11:41:29.104227    5033 log.go:172] (0xc000489ea0) (5) Data frame handling\nI0125 11:41:29.104253    5033 log.go:172] (0xc000489ea0) (5) Data frame sent\nConnection to 10.96.1.234 31578 port [tcp/31578] succeeded!\nI0125 11:41:29.190349    5033 log.go:172] (0xc00054af20) (0xc00020c000) Stream removed, broadcasting: 3\nI0125 11:41:29.190483    5033 log.go:172] (0xc00054af20) Data frame received for 1\nI0125 11:41:29.190495    5033 log.go:172] (0xc000489e00) (1) Data frame handling\nI0125 11:41:29.190526    5033 log.go:172] (0xc000489e00) (1) Data frame sent\nI0125 11:41:29.190567    5033 log.go:172] (0xc00054af20) (0xc000489e00) Stream removed, broadcasting: 1\nI0125 11:41:29.190799    5033 log.go:172] (0xc00054af20) (0xc000489ea0) Stream removed, broadcasting: 5\nI0125 11:41:29.191012    5033 log.go:172] (0xc00054af20) Go away received\nI0125 11:41:29.191471    5033 log.go:172] (0xc00054af20) (0xc000489e00) Stream removed, broadcasting: 1\nI0125 11:41:29.191523    5033 log.go:172] (0xc00054af20) (0xc00020c000) Stream removed, broadcasting: 3\nI0125 11:41:29.191530    5033 log.go:172] (0xc00054af20) (0xc000489ea0) Stream removed, broadcasting: 5\n"
Jan 25 11:41:29.210: INFO: stdout: ""
Jan 25 11:41:29.210: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 11:41:29.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4473" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:23.475 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":279,"completed":279,"skipped":4530,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan 25 11:41:29.269: INFO: Running AfterSuite actions on all nodes
Jan 25 11:41:29.270: INFO: Running AfterSuite actions on node 1
Jan 25 11:41:29.270: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":279,"completed":279,"skipped":4566,"failed":0}

Ran 279 of 4845 Specs in 7173.676 seconds
SUCCESS! -- 279 Passed | 0 Failed | 0 Pending | 4566 Skipped
PASS
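
The final spec above changes a Service's type from ExternalName to NodePort and then verifies reachability over both the ClusterIP and the allocated node port. Outside the e2e framework, the type change itself amounts to a patch along these lines; this is a sketch, not the test's own code, and the service name "my-externalname-svc" is a hypothetical stand-in (the exact fields the e2e test updates may differ):

  # Hypothetical service name; substitute your own ExternalName service.
  # The strategic merge patch sets the type, clears externalName (null deletes
  # the field), and defines a port, which non-ExternalName services require.
  kubectl patch service my-externalname-svc \
    -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80,"protocol":"TCP"}]}}'

Once patched, the cluster allocates a node port (31578 in this run), which is what the nc probes against the node IPs exercise.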