I0124 23:38:57.175616 9 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0124 23:38:57.176237 9 e2e.go:109] Starting e2e run "997416f5-d161-4209-ae7b-e3b49d7df842" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579909135 - Will randomize all specs
Will run 278 of 4841 specs

Jan 24 23:38:57.247: INFO: >>> kubeConfig: /root/.kube/config
Jan 24 23:38:57.255: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 24 23:38:57.315: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 24 23:38:57.366: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 24 23:38:57.366: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 24 23:38:57.366: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 24 23:38:57.381: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 24 23:38:57.381: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 24 23:38:57.381: INFO: e2e test version: v1.18.0-alpha.1.106+4f70231ce7736c
Jan 24 23:38:57.383: INFO: kube-apiserver version: v1.17.0
Jan 24 23:38:57.383: INFO: >>> kubeConfig: /root/.kube/config
Jan 24 23:38:57.390: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 23:38:57.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
Jan 24 23:38:57.587: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 23:38:58.198: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Jan 24 23:39:00.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 23:39:02.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 23:39:04.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715505938, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
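
For reference, the object that the [It] step below registers via the AdmissionRegistration API looks roughly like the following in Go. This is a minimal sketch assuming k8s.io/api at the v1.17 level seen in this run; the service name, handler path, and resource names are illustrative, not taken from the test source.

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mutatingWebhookForCRD builds the kind of configuration the test registers:
// a webhook that intercepts CREATEs of a custom resource.
func mutatingWebhookForCRD(caBundle []byte) *admissionregistrationv1.MutatingWebhookConfiguration {
	path := "/mutating-custom-resource" // hypothetical handler path on the webhook service
	sideEffects := admissionregistrationv1.SideEffectClassNone
	return &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "e2e-test-webhook-1762-crds.webhook.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// Points at the service fronting the webhook pod deployed above.
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-9236",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle, // the server cert set up in BeforeEach
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-1762-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}

func main() {
	fmt.Println(mutatingWebhookForCRD(nil).Name)
}
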
Jan 24 23:39:07.293: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 23:39:07.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1762-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 23:39:08.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9236" for this suite.
STEP: Destroying namespace "webhook-9236-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:11.415 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":1,"skipped":7,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 23:39:08.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 23:39:57.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5753" for this suite.
• [SLOW TEST:49.173 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":63,"failed":0}
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 23:39:57.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating server pod server in namespace prestop-5080
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5080
STEP: Deleting pre-stop pod
Jan 24 23:40:17.243: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 23:40:17.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5080" for this suite.
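
The "prestop": 1 counter in the JSON above is incremented because deleting the tester pod fires its container lifecycle hook. A minimal sketch of a pod with such a hook, assuming the v1.17-era k8s.io/api of this run (newer releases rename Handler to LifecycleHandler); the image and hook command are illustrative, not the test's.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopPod returns a pod whose container calls out to a peer before
// it is terminated; deleting the pod is what triggers the hook.
func preStopPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester", Namespace: "prestop-5080"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "tester",
				Image: "busybox", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container after the delete is issued but
					// before SIGTERM is delivered; the e2e hook POSTs to the
					// server pod, which is why the server saw "prestop": 1.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"wget", "-O-", "http://server:8080/prestop"}, // hypothetical endpoint
						},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(preStopPod().Name) }
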
• [SLOW TEST:19.307 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":3,"skipped":68,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 23:40:17.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-103.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-103.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-103.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-103.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-103.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-103.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-103.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 23:40:29.657: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:29.664: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:29.671: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:29.676: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:29.696: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:29.700: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:29.705: INFO: Unable to read jessie_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:29.710: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:29.720: INFO: Lookups using dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local]
Jan 24 23:40:34.727: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:34.732: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:34.738: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:34.742: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:34.757: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:34.765: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:34.771: INFO: Unable to read jessie_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:34.775: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:34.784: INFO: Lookups using dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local]
Jan 24 23:40:39.731: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:39.738: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:39.745: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:39.751: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:39.773: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:39.783: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:39.790: INFO: Unable to read jessie_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:39.801: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:39.816: INFO: Lookups using dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local]
Jan 24 23:40:44.730: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:44.738: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:44.744: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:44.750: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:44.768: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:44.774: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:44.781: INFO: Unable to read jessie_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:44.878: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:45.301: INFO: Lookups using dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local]
Jan 24 23:40:49.955: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:49.968: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:49.976: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:49.987: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:50.021: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:50.033: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:50.036: INFO: Unable to read jessie_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:50.043: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:50.054: INFO: Lookups using dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local]
Jan 24 23:40:54.730: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:54.735: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:54.739: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:54.745: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:54.767: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:54.772: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:54.777: INFO: Unable to read jessie_udp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:54.783: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local from pod dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622: the server could not find the requested resource (get pods dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622)
Jan 24 23:40:54.799: INFO: Lookups using dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local wheezy_udp@dns-test-service-2.dns-103.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-103.svc.cluster.local jessie_udp@dns-test-service-2.dns-103.svc.cluster.local jessie_tcp@dns-test-service-2.dns-103.svc.cluster.local]
Jan 24 23:40:59.871: INFO: DNS probes using dns-103/dns-test-8a6abee9-9f71-48bc-af78-39e2b91db622 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 23:41:00.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-103" for this suite.
• [SLOW TEST:42.822 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":4,"skipped":133,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 23:41:00.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 24 23:41:00.376: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9532 /api/v1/namespaces/watch-9532/configmaps/e2e-watch-test-resource-version 3b369df4-6600-4135-b687-b4d3e4438b7d 4114249 0 2020-01-24 23:41:00 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 24 23:41:00.376: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9532 /api/v1/namespaces/watch-9532/configmaps/e2e-watch-test-resource-version 3b369df4-6600-4135-b687-b4d3e4438b7d 4114250 0 2020-01-24 23:41:00 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 23:41:00.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9532" for this suite.
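
The trick this test relies on is that a watch started at an older resourceVersion replays everything that happened after that version, which is why the log shows the second MODIFIED and the DELETED event even though the watch was opened after the configmap was gone. A minimal client-go sketch, assuming the v1.17-era signatures of this run (newer client-go releases take a context.Context as the first argument); the resourceVersion value is illustrative.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Start the watch at the resourceVersion returned by the first update
	// ("4114248" is illustrative); the apiserver replays every change made
	// after that version.
	w, err := client.CoreV1().ConfigMaps("watch-9532").Watch(metav1.ListOptions{ResourceVersion: "4114248"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
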
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":5,"skipped":134,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 23:41:00.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 24 23:41:00.634: INFO: Waiting up to 5m0s for pod "pod-59a785ac-5100-46dc-a55a-6aa163d3efc6" in namespace "emptydir-3537" to be "success or failure"
Jan 24 23:41:00.642: INFO: Pod "pod-59a785ac-5100-46dc-a55a-6aa163d3efc6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.278219ms
Jan 24 23:41:02.652: INFO: Pod "pod-59a785ac-5100-46dc-a55a-6aa163d3efc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01818991s
Jan 24 23:41:04.660: INFO: Pod "pod-59a785ac-5100-46dc-a55a-6aa163d3efc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026167892s
Jan 24 23:41:06.667: INFO: Pod "pod-59a785ac-5100-46dc-a55a-6aa163d3efc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032857675s
Jan 24 23:41:08.673: INFO: Pod "pod-59a785ac-5100-46dc-a55a-6aa163d3efc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039202922s
Jan 24 23:41:10.683: INFO: Pod "pod-59a785ac-5100-46dc-a55a-6aa163d3efc6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.048582207s
Jan 24 23:41:12.692: INFO: Pod "pod-59a785ac-5100-46dc-a55a-6aa163d3efc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.058094673s
STEP: Saw pod success
Jan 24 23:41:12.692: INFO: Pod "pod-59a785ac-5100-46dc-a55a-6aa163d3efc6" satisfied condition "success or failure"
Jan 24 23:41:12.696: INFO: Trying to get logs from node jerma-node pod pod-59a785ac-5100-46dc-a55a-6aa163d3efc6 container test-container:
STEP: delete the pod
Jan 24 23:41:12.771: INFO: Waiting for pod pod-59a785ac-5100-46dc-a55a-6aa163d3efc6 to disappear
Jan 24 23:41:12.809: INFO: Pod pod-59a785ac-5100-46dc-a55a-6aa163d3efc6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 23:41:12.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3537" for this suite.
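
The "(non-root,0777,default)" triple in the test name describes the pod shape: a non-root user, a 0777 mode expectation, and the default (node-disk) emptyDir medium; the pod runs once and the suite waits for it to reach the Succeeded phase, the "success or failure" condition in the log. A minimal sketch of such a pod, assuming the v1.17-era k8s.io/api; the image and command are illustrative, not the e2e suite's own test image.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirPod() *corev1.Pod {
	uid := int64(1000) // any non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo", Namespace: "emptydir-3537"},
		Spec: corev1.PodSpec{
			// Run once to completion so the pod ends in the Succeeded phase.
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Print the mount's permission bits; the e2e test asserts 0777.
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// "default medium" = node-local disk, as opposed to
					// StorageMediumMemory (tmpfs).
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
}

func main() { fmt.Println(emptyDirPod().Name) }
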
• [SLOW TEST:12.425 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":139,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 23:41:12.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 23:41:13.907: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 24 23:41:15.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506073, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506073, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506074, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506073, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 23:41:17.977: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506073, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506073, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506074, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506073, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 23:41:19.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506073, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506073, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506074, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506073, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 23:41:23.023: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 23:41:35.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-769" for this suite.
STEP: Destroying namespace "webhook-769-markers" for this suite.
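
The steps above exercise two fields of the webhook registration: TimeoutSeconds and FailurePolicy. With a 1s timeout against a handler that takes 5s, a Fail policy rejects the request while Ignore lets it through, and a nil TimeoutSeconds defaults to 10s in v1, matching the last step. A minimal sketch of such a registration, assuming v1.17-era k8s.io/api; the handler path, webhook name, and service details are illustrative.

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func slowWebhookConfig(timeout int32, policy admissionregistrationv1.FailurePolicyType) *admissionregistrationv1.ValidatingWebhookConfiguration {
	path := "/always-allow-delay-5s" // hypothetical slow handler path
	sideEffects := admissionregistrationv1.SideEffectClassNone
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-slow-webhook"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "slow-webhook.example.com", // illustrative
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{Namespace: "webhook-769", Name: "e2e-test-webhook", Path: &path},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule:       admissionregistrationv1.Rule{APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"}},
			}},
			// The two knobs under test: how long the apiserver waits for the
			// webhook, and what happens when that wait expires.
			TimeoutSeconds:          &timeout,
			FailurePolicy:           &policy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}

func main() {
	cfg := slowWebhookConfig(1, admissionregistrationv1.Ignore)
	fmt.Println(cfg.Name, *cfg.Webhooks[0].TimeoutSeconds)
}
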
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:22.653 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":7,"skipped":163,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 23:41:35.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 24 23:41:59.721: INFO: Container started at 2020-01-24 23:41:43 +0000 UTC, pod became ready at 2020-01-24 23:41:59 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 23:41:59.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9247" for this suite.
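
The assertion behind the log line above is that the kubelet must not mark the pod Ready before the probe's initial delay elapses, and that a failing readiness probe never restarts the container (restarting is liveness's job). A sketch of the relevant container spec, assuming the v1.17-era k8s.io/api where the probe embeds Handler (newer releases rename it ProbeHandler); the image, endpoint, and delay values are illustrative, not the test's.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func readinessContainer() corev1.Container {
	return corev1.Container{
		Name:  "test-webserver",
		Image: "nginx", // illustrative
		ReadinessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
			},
			// The pod must not report Ready before this delay has passed.
			InitialDelaySeconds: 30,
			PeriodSeconds:       5,
		},
	}
}

func main() { fmt.Println(readinessContainer().ReadinessProbe.InitialDelaySeconds) }
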
• [SLOW TEST:24.257 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":202,"failed":0}
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 23:41:59.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-8226
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 24 23:41:59.967: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 24 23:42:32.148: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.44.0.2&port=8080&tries=1'] Namespace:pod-network-test-8226 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 23:42:32.148: INFO: >>> kubeConfig: /root/.kube/config
I0124 23:42:32.215605 9 log.go:172] (0xc002aff4a0) (0xc002af4640) Create stream
I0124 23:42:32.215752 9 log.go:172] (0xc002aff4a0) (0xc002af4640) Stream added, broadcasting: 1
I0124 23:42:32.226123 9 log.go:172] (0xc002aff4a0) Reply frame received for 1
I0124 23:42:32.226187 9 log.go:172] (0xc002aff4a0) (0xc00295e1e0) Create stream
I0124 23:42:32.226206 9 log.go:172] (0xc002aff4a0) (0xc00295e1e0) Stream added, broadcasting: 3
I0124 23:42:32.228576 9 log.go:172] (0xc002aff4a0) Reply frame received for 3
I0124 23:42:32.228631 9 log.go:172] (0xc002aff4a0) (0xc0024219a0) Create stream
I0124 23:42:32.228677 9 log.go:172] (0xc002aff4a0) (0xc0024219a0) Stream added, broadcasting: 5
I0124 23:42:32.231876 9 log.go:172] (0xc002aff4a0) Reply frame received for 5
I0124 23:42:32.354418 9 log.go:172] (0xc002aff4a0) Data frame received for 3
I0124 23:42:32.354488 9 log.go:172] (0xc00295e1e0) (3) Data frame handling
I0124 23:42:32.354511 9 log.go:172] (0xc00295e1e0) (3) Data frame sent
I0124 23:42:32.440843 9 log.go:172] (0xc002aff4a0) Data frame received for 1
I0124 23:42:32.441272 9 log.go:172] (0xc002aff4a0) (0xc0024219a0) Stream removed, broadcasting: 5
I0124 23:42:32.441604 9 log.go:172] (0xc002aff4a0) (0xc00295e1e0) Stream removed, broadcasting: 3
I0124 23:42:32.441699 9 log.go:172] (0xc002af4640) (1) Data frame handling
I0124 23:42:32.441736 9 log.go:172] (0xc002af4640) (1) Data frame sent
I0124 23:42:32.441782 9 log.go:172] (0xc002aff4a0) (0xc002af4640) Stream removed, broadcasting: 1
I0124 23:42:32.441851 9 log.go:172] (0xc002aff4a0) Go away received
I0124 23:42:32.442940 9 log.go:172] (0xc002aff4a0) (0xc002af4640) Stream removed, broadcasting: 1
I0124 23:42:32.442983 9 log.go:172] (0xc002aff4a0) (0xc00295e1e0) Stream removed, broadcasting: 3
I0124 23:42:32.443005 9 log.go:172] (0xc002aff4a0) (0xc0024219a0) Stream removed, broadcasting: 5
Jan 24 23:42:32.443: INFO: Waiting for responses: map[]
Jan 24 23:42:32.452: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-8226 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 24 23:42:32.452: INFO: >>> kubeConfig: /root/.kube/config
I0124 23:42:32.512096 9 log.go:172] (0xc00060a160) (0xc002421e00) Create stream
I0124 23:42:32.512282 9 log.go:172] (0xc00060a160) (0xc002421e00) Stream added, broadcasting: 1
I0124 23:42:32.517709 9 log.go:172] (0xc00060a160) Reply frame received for 1
I0124 23:42:32.517782 9 log.go:172] (0xc00060a160) (0xc002af46e0) Create stream
I0124 23:42:32.517839 9 log.go:172] (0xc00060a160) (0xc002af46e0) Stream added, broadcasting: 3
I0124 23:42:32.519288 9 log.go:172] (0xc00060a160) Reply frame received for 3
I0124 23:42:32.519327 9 log.go:172] (0xc00060a160) (0xc0024afa40) Create stream
I0124 23:42:32.519347 9 log.go:172] (0xc00060a160) (0xc0024afa40) Stream added, broadcasting: 5
I0124 23:42:32.520946 9 log.go:172] (0xc00060a160) Reply frame received for 5
I0124 23:42:32.637264 9 log.go:172] (0xc00060a160) Data frame received for 3
I0124 23:42:32.637698 9 log.go:172] (0xc002af46e0) (3) Data frame handling
I0124 23:42:32.637759 9 log.go:172] (0xc002af46e0) (3) Data frame sent
I0124 23:42:32.744712 9 log.go:172] (0xc00060a160) (0xc002af46e0) Stream removed, broadcasting: 3
I0124 23:42:32.745127 9 log.go:172] (0xc00060a160) Data frame received for 1
I0124 23:42:32.745363 9 log.go:172] (0xc002421e00) (1) Data frame handling
I0124 23:42:32.745574 9 log.go:172] (0xc002421e00) (1) Data frame sent
I0124 23:42:32.745702 9 log.go:172] (0xc00060a160) (0xc0024afa40) Stream removed, broadcasting: 5
I0124 23:42:32.745971 9 log.go:172] (0xc00060a160) (0xc002421e00) Stream removed, broadcasting: 1
I0124 23:42:32.746096 9 log.go:172] (0xc00060a160) Go away received
I0124 23:42:32.747072 9 log.go:172] (0xc00060a160) (0xc002421e00) Stream removed, broadcasting: 1
I0124 23:42:32.747136 9 log.go:172] (0xc00060a160) (0xc002af46e0) Stream removed, broadcasting: 3
I0124 23:42:32.747151 9 log.go:172] (0xc00060a160) (0xc0024afa40) Stream removed, broadcasting: 5
Jan 24 23:42:32.747: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 23:42:32.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8226" for this suite.
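
Stripped of the exec-stream plumbing, each check above is a single HTTP request from inside test-container-pod: it asks the prober on one pod IP to dial its peer once and report the hostnames that answered. An empty "Waiting for responses: map[]" means no expected hostname is still outstanding. The same probe in Go (the pod IPs are the ones from this run and are only reachable from inside the cluster network):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Ask the webserver on 10.44.0.1 to dial its peer pod 10.44.0.2 over
	// HTTP once and return what it heard back.
	url := "http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.44.0.2&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The test-image webserver answers with JSON listing one response per
	// successful dial; here we just print it raw.
	fmt.Println(string(body))
}
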
• [SLOW TEST:33.041 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":209,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 24 23:42:32.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Jan 24 23:42:32.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 24 23:42:52.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-795" for this suite.
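
The shape of the CRD behind this test: two versions, one of which is later flipped to served=false, after which its definitions must disappear from the published OpenAPI spec while the other version's stay intact. A minimal sketch, assuming k8s.io/apiextensions-apiserver from the v1.17 era; the group and names are illustrative, not the test's generated ones.

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-crds.crd-publish-openapi-test.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "crd-publish-openapi-test.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "e2e-test-crds", Singular: "e2e-test-crd", Kind: "E2eTestCrd", ListKind: "E2eTestCrdList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				// The "mark a version not served" step flips this Served
				// field to false; only v1 then remains in the OpenAPI spec.
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
}

func main() { fmt.Println(multiVersionCRD().Name) }
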
• [SLOW TEST:19.970 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":10,"skipped":216,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:42:52.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 24 23:42:53.304: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 24 23:42:55.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 23:42:57.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 23:42:59.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506173, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 24 23:43:02.417: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:43:12.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1230" for this suite. STEP: Destroying namespace "webhook-1230-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:20.190 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":11,"skipped":260,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:43:12.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 23:43:13.015: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jan 24 23:43:13.093: INFO: Number of nodes with available pods: 0 Jan 24 23:43:13.093: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:15.769: INFO: Number of nodes with available pods: 0 Jan 24 23:43:15.769: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:16.684: INFO: Number of nodes with available pods: 0 Jan 24 23:43:16.684: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:17.121: INFO: Number of nodes with available pods: 0 Jan 24 23:43:17.121: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:18.133: INFO: Number of nodes with available pods: 0 Jan 24 23:43:18.134: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:19.226: INFO: Number of nodes with available pods: 0 Jan 24 23:43:19.226: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:21.198: INFO: Number of nodes with available pods: 0 Jan 24 23:43:21.198: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:22.878: INFO: Number of nodes with available pods: 0 Jan 24 23:43:22.878: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:23.122: INFO: Number of nodes with available pods: 0 Jan 24 23:43:23.122: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:24.110: INFO: Number of nodes with available pods: 1 Jan 24 23:43:24.110: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:25.102: INFO: Number of nodes with available pods: 2 Jan 24 23:43:25.102: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
Jan 24 23:43:25.179: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:25.179: INFO: Wrong image for pod: daemon-set-sznvm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:26.205: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:26.205: INFO: Wrong image for pod: daemon-set-sznvm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:27.205: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:27.205: INFO: Wrong image for pod: daemon-set-sznvm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:28.214: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:28.214: INFO: Wrong image for pod: daemon-set-sznvm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:29.201: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:29.201: INFO: Wrong image for pod: daemon-set-sznvm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:30.200: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:30.200: INFO: Wrong image for pod: daemon-set-sznvm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:31.202: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:31.202: INFO: Wrong image for pod: daemon-set-sznvm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:31.202: INFO: Pod daemon-set-sznvm is not available Jan 24 23:43:32.439: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:32.439: INFO: Pod daemon-set-tmvhx is not available Jan 24 23:43:33.210: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:33.210: INFO: Pod daemon-set-tmvhx is not available Jan 24 23:43:34.205: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:34.205: INFO: Pod daemon-set-tmvhx is not available Jan 24 23:43:36.738: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:36.739: INFO: Pod daemon-set-tmvhx is not available Jan 24 23:43:37.685: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Jan 24 23:43:37.685: INFO: Pod daemon-set-tmvhx is not available Jan 24 23:43:38.202: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:38.202: INFO: Pod daemon-set-tmvhx is not available Jan 24 23:43:39.199: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:40.204: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:41.270: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:42.203: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:43.241: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:43.241: INFO: Pod daemon-set-7tdbj is not available Jan 24 23:43:44.203: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:44.203: INFO: Pod daemon-set-7tdbj is not available Jan 24 23:43:45.201: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:45.201: INFO: Pod daemon-set-7tdbj is not available Jan 24 23:43:46.201: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:46.201: INFO: Pod daemon-set-7tdbj is not available Jan 24 23:43:47.200: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:47.200: INFO: Pod daemon-set-7tdbj is not available Jan 24 23:43:48.202: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:48.202: INFO: Pod daemon-set-7tdbj is not available Jan 24 23:43:49.201: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:49.201: INFO: Pod daemon-set-7tdbj is not available Jan 24 23:43:50.201: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:50.201: INFO: Pod daemon-set-7tdbj is not available Jan 24 23:43:51.201: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:51.201: INFO: Pod daemon-set-7tdbj is not available Jan 24 23:43:52.204: INFO: Wrong image for pod: daemon-set-7tdbj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Jan 24 23:43:52.204: INFO: Pod daemon-set-7tdbj is not available Jan 24 23:43:53.202: INFO: Pod daemon-set-dfsnh is not available STEP: Check that daemon pods are still running on every node of the cluster. 
Jan 24 23:43:53.217: INFO: Number of nodes with available pods: 1 Jan 24 23:43:53.217: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:54.225: INFO: Number of nodes with available pods: 1 Jan 24 23:43:54.225: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:55.227: INFO: Number of nodes with available pods: 1 Jan 24 23:43:55.227: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:56.268: INFO: Number of nodes with available pods: 1 Jan 24 23:43:56.268: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:57.239: INFO: Number of nodes with available pods: 1 Jan 24 23:43:57.239: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:58.230: INFO: Number of nodes with available pods: 1 Jan 24 23:43:58.230: INFO: Node jerma-node is running more than one daemon pod Jan 24 23:43:59.227: INFO: Number of nodes with available pods: 2 Jan 24 23:43:59.227: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8664, will wait for the garbage collector to delete the pods Jan 24 23:43:59.356: INFO: Deleting DaemonSet.extensions daemon-set took: 45.13504ms Jan 24 23:43:59.757: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.374272ms Jan 24 23:44:13.167: INFO: Number of nodes with available pods: 0 Jan 24 23:44:13.167: INFO: Number of running nodes: 0, number of available pods: 0 Jan 24 23:44:13.171: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8664/daemonsets","resourceVersion":"4115048"},"items":null} Jan 24 23:44:13.174: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8664/pods","resourceVersion":"4115048"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:44:13.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8664" for this suite. 
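The image flip above is the stock DaemonSet RollingUpdate flow and can be driven the same way by hand; a sketch, assuming the DaemonSet's container is named app (the log does not show the container name):

# switch the pod template image; with updateStrategy RollingUpdate the
# controller replaces pods node by node, which is what the polling above shows
kubectl -n daemonsets-8664 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
# block until every node runs a pod from the new template
kubectl -n daemonsets-8664 rollout status daemonset/daemon-set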
• [SLOW TEST:60.259 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":12,"skipped":289,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:44:13.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 24 23:44:27.371: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 24 23:44:27.384: INFO: Pod pod-with-prestop-http-hook still exists Jan 24 23:44:29.384: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 24 23:44:29.392: INFO: Pod pod-with-prestop-http-hook still exists Jan 24 23:44:31.384: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 24 23:44:31.397: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:44:31.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-335" for this suite. 
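The pod under test carries a preStop httpGet hook, so deleting it makes the kubelet fire the GET before stopping the container; the final STEP then asks the handler pod whether the request arrived. A minimal sketch of such a pod (container name, image, path, and port are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main                          # illustrative
    image: docker.io/library/httpd:2.4.38-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /echo                   # the suite points this at its handler pod
          port: 8080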
• [SLOW TEST:18.261 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":292,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:44:31.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-585cc1b8-83d4-4ce8-9735-327038f0e0e5 STEP: Creating a pod to test consume configMaps Jan 24 23:44:31.638: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3" in namespace "projected-449" to be "success or failure" Jan 24 23:44:31.645: INFO: Pod "pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.055401ms Jan 24 23:44:33.656: INFO: Pod "pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017709762s Jan 24 23:44:35.663: INFO: Pod "pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025012417s Jan 24 23:44:37.670: INFO: Pod "pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032069177s Jan 24 23:44:39.677: INFO: Pod "pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038227533s Jan 24 23:44:41.684: INFO: Pod "pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.04612157s STEP: Saw pod success Jan 24 23:44:41.685: INFO: Pod "pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3" satisfied condition "success or failure" Jan 24 23:44:41.691: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3 container projected-configmap-volume-test: STEP: delete the pod Jan 24 23:44:41.742: INFO: Waiting for pod pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3 to disappear Jan 24 23:44:41.753: INFO: Pod pod-projected-configmaps-0652b887-c3e8-42b9-9cd5-9f757bae3fa3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:44:41.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-449" for this suite. • [SLOW TEST:10.299 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":293,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:44:41.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1693 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 24 23:44:41.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6291' Jan 24 23:44:43.981: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 24 23:44:43.981: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jan 24 23:44:44.055: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 24 23:44:44.059: INFO: scanned /root for discovery docs: Jan 24 23:44:44.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6291' Jan 24 23:45:05.520: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 24 23:45:05.520: INFO: stdout: "Created e2e-test-httpd-rc-5cce45b022e1f6b6b7852dcb744057a1\nScaling up e2e-test-httpd-rc-5cce45b022e1f6b6b7852dcb744057a1 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-5cce45b022e1f6b6b7852dcb744057a1 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-5cce45b022e1f6b6b7852dcb744057a1 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jan 24 23:45:05.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-6291' Jan 24 23:45:05.734: INFO: stderr: "" Jan 24 23:45:05.734: INFO: stdout: "e2e-test-httpd-rc-5cce45b022e1f6b6b7852dcb744057a1-l4z2n " Jan 24 23:45:05.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-5cce45b022e1f6b6b7852dcb744057a1-l4z2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6291' Jan 24 23:45:05.839: INFO: stderr: "" Jan 24 23:45:05.839: INFO: stdout: "true" Jan 24 23:45:05.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-5cce45b022e1f6b6b7852dcb744057a1-l4z2n -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6291' Jan 24 23:45:05.990: INFO: stderr: "" Jan 24 23:45:05.990: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jan 24 23:45:05.990: INFO: e2e-test-httpd-rc-5cce45b022e1f6b6b7852dcb744057a1-l4z2n is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1699 Jan 24 23:45:05.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6291' Jan 24 23:45:06.135: INFO: stderr: "" Jan 24 23:45:06.135: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:45:06.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6291" for this suite. • [SLOW TEST:24.424 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1688 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":15,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:45:06.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Jan 24 23:45:06.253: INFO: >>> kubeConfig: /root/.kube/config Jan 24 23:45:09.748: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:45:22.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8748" for this suite. 
• [SLOW TEST:16.188 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":16,"skipped":323,"failed":0} SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:45:22.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:45:22.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-364" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":17,"skipped":328,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:45:22.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 23:45:22.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9fcf199a-b811-44b2-9a92-209bb43e9e84" in namespace "projected-8800" to be "success or failure" Jan 24 23:45:22.655: INFO: Pod "downwardapi-volume-9fcf199a-b811-44b2-9a92-209bb43e9e84": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.895617ms Jan 24 23:45:24.664: INFO: Pod "downwardapi-volume-9fcf199a-b811-44b2-9a92-209bb43e9e84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02990576s Jan 24 23:45:26.680: INFO: Pod "downwardapi-volume-9fcf199a-b811-44b2-9a92-209bb43e9e84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045913848s Jan 24 23:45:28.687: INFO: Pod "downwardapi-volume-9fcf199a-b811-44b2-9a92-209bb43e9e84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052886649s Jan 24 23:45:30.692: INFO: Pod "downwardapi-volume-9fcf199a-b811-44b2-9a92-209bb43e9e84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058342688s STEP: Saw pod success Jan 24 23:45:30.692: INFO: Pod "downwardapi-volume-9fcf199a-b811-44b2-9a92-209bb43e9e84" satisfied condition "success or failure" Jan 24 23:45:30.697: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9fcf199a-b811-44b2-9a92-209bb43e9e84 container client-container: STEP: delete the pod Jan 24 23:45:30.759: INFO: Waiting for pod downwardapi-volume-9fcf199a-b811-44b2-9a92-209bb43e9e84 to disappear Jan 24 23:45:30.763: INFO: Pod downwardapi-volume-9fcf199a-b811-44b2-9a92-209bb43e9e84 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:45:30.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8800" for this suite. • [SLOW TEST:8.275 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":338,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:45:30.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 24 23:45:39.439: INFO: Successfully updated pod "pod-update-00650ac0-df8c-4567-8c2c-32cda474210c" STEP: verifying the updated pod is in kubernetes Jan 24 23:45:39.469: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:45:39.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3658" for this suite. 
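The update here is an ordinary write to the pod's mutable metadata; the same effect can be had with a patch. A sketch reusing the pod name from the log (the label key and value are illustrative; the log does not show which field the suite changes):

kubectl -n pods-3658 patch pod pod-update-00650ac0-df8c-4567-8c2c-32cda474210c \
  -p '{"metadata":{"labels":{"time":"updated"}}}'
kubectl -n pods-3658 get pod pod-update-00650ac0-df8c-4567-8c2c-32cda474210c --show-labels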
• [SLOW TEST:8.700 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:45:39.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: set up a multi version CRD Jan 24 23:45:39.592: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:45:57.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6216" for this suite. 
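Renaming a version amounts to replacing its entry in spec.versions; the old name must then disappear from /openapi/v2, the new one must be served, and the untouched version's definitions must not move, which is exactly what the STEPs above assert. An illustrative fragment of the edited CRD:

spec:
  versions:
  - name: v1          # untouched version; its published schema must stay identical
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v3          # formerly v2; only the name changes
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object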
• [SLOW TEST:18.470 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":20,"skipped":398,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:45:57.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:45:58.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9545" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":424,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:45:58.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 23:45:58.290: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:46:04.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5270" for this suite. 
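The listing itself is a plain collection GET against the apiextensions API, equivalent to:

kubectl get customresourcedefinitions
# or the raw list resource the test client reads:
kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions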
• [SLOW TEST:5.885 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":22,"skipped":434,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:46:04.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 23:46:04.150: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 24 23:46:04.255: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 24 23:46:09.316: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 24 23:46:11.331: INFO: Creating deployment "test-rolling-update-deployment" Jan 24 23:46:11.342: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 24 23:46:11.366: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 24 23:46:13.378: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 24 23:46:13.382: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 23:46:15.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, 
UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 23:46:17.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715506371, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 23:46:19.389: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67 Jan 24 23:46:19.408: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-590 /apis/apps/v1/namespaces/deployment-590/deployments/test-rolling-update-deployment c1bd2651-b714-44ff-b350-cff1a5fae869 4115744 1 2020-01-24 23:46:11 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004a32f08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-24 23:46:11 +0000 UTC,LastTransitionTime:2020-01-24 23:46:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-24 23:46:18 +0000 UTC,LastTransitionTime:2020-01-24 23:46:11 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 24 23:46:19.415: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-590 /apis/apps/v1/namespaces/deployment-590/replicasets/test-rolling-update-deployment-67cf4f6444 7c261ce8-8507-4f6b-ba7a-258aad333ca5 4115734 1 2020-01-24 23:46:11 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c1bd2651-b714-44ff-b350-cff1a5fae869 0xc004a33467 0xc004a33468}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004a334e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 24 23:46:19.415: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 24 23:46:19.415: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-590 /apis/apps/v1/namespaces/deployment-590/replicasets/test-rolling-update-controller 84e1c627-f371-4474-ba27-221e28d33acf 4115743 2 2020-01-24 23:46:04 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment c1bd2651-b714-44ff-b350-cff1a5fae869 0xc004a33387 0xc004a33388}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod:
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004a333e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 24 23:46:19.421: INFO: Pod "test-rolling-update-deployment-67cf4f6444-fz4kc" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-fz4kc test-rolling-update-deployment-67cf4f6444- deployment-590 /api/v1/namespaces/deployment-590/pods/test-rolling-update-deployment-67cf4f6444-fz4kc bdfc873d-a4bf-4aae-b30d-977f876015cd 4115733 0 2020-01-24 23:46:11 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 7c261ce8-8507-4f6b-ba7a-258aad333ca5 0xc0049da887 0xc0049da888}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x762r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x762r,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x762r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,T
olerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 23:46:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 23:46:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 23:46:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-24 23:46:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-24 23:46:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-24 23:46:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://f101891e1767cfbb269af902c96bb424576ed9cdf9ebaf310cf2378255aa76ab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:46:19.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-590" for this suite. 
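The Deployment printed above can be reassembled from the object dump; the strategy block is the part this spec exercises, and adoption of test-rolling-update-controller happens because the selector matches that ReplicaSet's pods. A sketch built only from fields visible in the dump:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  namespace: deployment-590
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF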
• [SLOW TEST:15.353 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":23,"skipped":461,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:46:19.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 23:46:19.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975" in namespace "projected-8893" to be "success or failure" Jan 24 23:46:19.678: INFO: Pod "downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975": Phase="Pending", Reason="", readiness=false. Elapsed: 58.655812ms Jan 24 23:46:21.687: INFO: Pod "downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067637612s Jan 24 23:46:23.694: INFO: Pod "downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073877218s Jan 24 23:46:25.698: INFO: Pod "downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078630524s Jan 24 23:46:27.705: INFO: Pod "downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085395226s Jan 24 23:46:29.713: INFO: Pod "downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093415491s STEP: Saw pod success Jan 24 23:46:29.713: INFO: Pod "downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975" satisfied condition "success or failure" Jan 24 23:46:29.718: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975 container client-container: STEP: delete the pod Jan 24 23:46:30.003: INFO: Waiting for pod downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975 to disappear Jan 24 23:46:30.010: INFO: Pod downwardapi-volume-18550f6c-e230-41eb-ae74-0711ca844975 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:46:30.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8893" for this suite. 
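Note: the Projected downwardAPI test above checks that a per-item mode override is applied to the projected file. A minimal sketch of the volume shape involved, using the corev1 types; the 0400 mode and file path are illustrative since the log does not print them:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	mode := int32Ptr(0400) // illustrative; the point is the per-item override
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
							Mode:     mode, // kubelet creates the file with exactly this mode
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("volume %q projects metadata.name at mode %o\n", vol.Name, *mode)
}
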
• [SLOW TEST:10.590 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":479,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:46:30.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 23:46:30.192: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e32b0a88-4dcc-4788-82de-b24139180513" in namespace "security-context-test-1530" to be "success or failure" Jan 24 23:46:30.204: INFO: Pod "busybox-user-65534-e32b0a88-4dcc-4788-82de-b24139180513": Phase="Pending", Reason="", readiness=false. Elapsed: 11.798294ms Jan 24 23:46:32.211: INFO: Pod "busybox-user-65534-e32b0a88-4dcc-4788-82de-b24139180513": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019532597s Jan 24 23:46:34.218: INFO: Pod "busybox-user-65534-e32b0a88-4dcc-4788-82de-b24139180513": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026123026s Jan 24 23:46:36.252: INFO: Pod "busybox-user-65534-e32b0a88-4dcc-4788-82de-b24139180513": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059831258s Jan 24 23:46:38.260: INFO: Pod "busybox-user-65534-e32b0a88-4dcc-4788-82de-b24139180513": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068093765s Jan 24 23:46:38.260: INFO: Pod "busybox-user-65534-e32b0a88-4dcc-4788-82de-b24139180513" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:46:38.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1530" for this suite. 
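Note: the Security Context test above (pod busybox-user-65534-…) verifies that runAsUser makes the container process run with uid 65534. A minimal sketch of such a pod; the busybox tag and command are illustrative assumptions, only the uid comes from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29", // illustrative tag
				Command: []string{"sh", "-c", "id -u"},    // prints the effective uid
				// RunAsUser forces the container's entrypoint to run as this uid.
				SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(65534)},
			}},
		},
	}
	fmt.Printf("pod %s runs as uid %d\n", pod.Name, *pod.Spec.Containers[0].SecurityContext.RunAsUser)
}
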
• [SLOW TEST:8.250 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":496,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:46:38.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 24 23:46:38.630: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64346ffd-c08f-4bd4-a5af-44ea520202c1" in namespace "downward-api-8364" to be "success or failure" Jan 24 23:46:38.679: INFO: Pod "downwardapi-volume-64346ffd-c08f-4bd4-a5af-44ea520202c1": Phase="Pending", Reason="", readiness=false. Elapsed: 48.643463ms Jan 24 23:46:40.687: INFO: Pod "downwardapi-volume-64346ffd-c08f-4bd4-a5af-44ea520202c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057326716s Jan 24 23:46:42.696: INFO: Pod "downwardapi-volume-64346ffd-c08f-4bd4-a5af-44ea520202c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065808045s Jan 24 23:46:44.825: INFO: Pod "downwardapi-volume-64346ffd-c08f-4bd4-a5af-44ea520202c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194575044s Jan 24 23:46:46.831: INFO: Pod "downwardapi-volume-64346ffd-c08f-4bd4-a5af-44ea520202c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.200814315s STEP: Saw pod success Jan 24 23:46:46.831: INFO: Pod "downwardapi-volume-64346ffd-c08f-4bd4-a5af-44ea520202c1" satisfied condition "success or failure" Jan 24 23:46:46.838: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-64346ffd-c08f-4bd4-a5af-44ea520202c1 container client-container: STEP: delete the pod Jan 24 23:46:46.920: INFO: Waiting for pod downwardapi-volume-64346ffd-c08f-4bd4-a5af-44ea520202c1 to disappear Jan 24 23:46:46.933: INFO: Pod downwardapi-volume-64346ffd-c08f-4bd4-a5af-44ea520202c1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:46:46.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8364" for this suite. 
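Note: the Downward API volume test above exposes a container's cpu request as a file via a resourceFieldRef. A minimal sketch under stated assumptions: the container name client-container appears in the log, while the agnhost image and the 250m request are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	container := corev1.Container{
		Name:  "client-container",
		Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // illustrative
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU: resource.MustParse("250m"), // illustrative request
			},
		},
	}
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_request",
					// resourceFieldRef surfaces the named container's cpu
					// request as the contents of this file.
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: container.Name,
						Resource:      "requests.cpu",
					},
				}},
			},
		},
	}
	fmt.Printf("volume %q exposes requests.cpu of %s at %s\n",
		vol.Name, container.Name, vol.VolumeSource.DownwardAPI.Items[0].Path)
}
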
• [SLOW TEST:8.682 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":514,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:46:46.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 24 23:46:47.091: INFO: Waiting up to 5m0s for pod "pod-48c3f837-1d88-45b6-b317-031a8f783efc" in namespace "emptydir-8285" to be "success or failure" Jan 24 23:46:47.110: INFO: Pod "pod-48c3f837-1d88-45b6-b317-031a8f783efc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.572338ms Jan 24 23:46:49.116: INFO: Pod "pod-48c3f837-1d88-45b6-b317-031a8f783efc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024795871s Jan 24 23:46:51.126: INFO: Pod "pod-48c3f837-1d88-45b6-b317-031a8f783efc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034904747s Jan 24 23:46:53.134: INFO: Pod "pod-48c3f837-1d88-45b6-b317-031a8f783efc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04309271s Jan 24 23:46:55.138: INFO: Pod "pod-48c3f837-1d88-45b6-b317-031a8f783efc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047039326s STEP: Saw pod success Jan 24 23:46:55.138: INFO: Pod "pod-48c3f837-1d88-45b6-b317-031a8f783efc" satisfied condition "success or failure" Jan 24 23:46:55.140: INFO: Trying to get logs from node jerma-node pod pod-48c3f837-1d88-45b6-b317-031a8f783efc container test-container: STEP: delete the pod Jan 24 23:46:55.216: INFO: Waiting for pod pod-48c3f837-1d88-45b6-b317-031a8f783efc to disappear Jan 24 23:46:55.224: INFO: Pod pod-48c3f837-1d88-45b6-b317-031a8f783efc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:46:55.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8285" for this suite. 
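Note: the EmptyDir test above checks the default mode of a tmpfs-backed volume. Setting the emptyDir medium to "Memory" is what backs the mount with tmpfs; a minimal sketch follows, with the image, command, and /test-volume mount path as illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs rather than
					// node disk; contents vanish when the pod is removed.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29",                  // illustrative
				Command:      []string{"sh", "-c", "mount | grep /test-volume"}, // shows tmpfs
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Printf("pod %s mounts a tmpfs-backed emptyDir\n", pod.Name)
}
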
• [SLOW TEST:8.279 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":515,"failed":0} [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:46:55.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: getting the auto-created API token STEP: reading a file in the container Jan 24 23:47:04.004: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5302 pod-service-account-1c37c1ac-3b48-4a62-8715-cb199e1e5684 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 24 23:47:04.359: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5302 pod-service-account-1c37c1ac-3b48-4a62-8715-cb199e1e5684 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 24 23:47:04.710: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5302 pod-service-account-1c37c1ac-3b48-4a62-8715-cb199e1e5684 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:47:05.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5302" for this suite. 
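Note: the ServiceAccounts test above execs into the pod and cats the three auto-mounted credential files. From inside any pod with a mounted token, the same check is a few lines of Go; the file paths come straight from the log, the rest is a sketch:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default mount point for the pod's service account credentials.
	root := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(root, f))
		if err != nil {
			fmt.Println(f, "missing:", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
}
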
• [SLOW TEST:9.843 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":28,"skipped":515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:47:05.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jan 24 23:47:05.181: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8677 /api/v1/namespaces/watch-8677/configmaps/e2e-watch-test-label-changed 66c430ce-de89-4d29-a3dc-e076fae1e2fa 4115994 0 2020-01-24 23:47:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 24 23:47:05.181: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8677 /api/v1/namespaces/watch-8677/configmaps/e2e-watch-test-label-changed 66c430ce-de89-4d29-a3dc-e076fae1e2fa 4115995 0 2020-01-24 23:47:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 24 23:47:05.181: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8677 /api/v1/namespaces/watch-8677/configmaps/e2e-watch-test-label-changed 66c430ce-de89-4d29-a3dc-e076fae1e2fa 4115996 0 2020-01-24 23:47:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jan 24 23:47:15.244: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8677 /api/v1/namespaces/watch-8677/configmaps/e2e-watch-test-label-changed 66c430ce-de89-4d29-a3dc-e076fae1e2fa 4116035 0 2020-01-24 23:47:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 24 23:47:15.245: INFO: 
Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8677 /api/v1/namespaces/watch-8677/configmaps/e2e-watch-test-label-changed 66c430ce-de89-4d29-a3dc-e076fae1e2fa 4116036 0 2020-01-24 23:47:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jan 24 23:47:15.245: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8677 /api/v1/namespaces/watch-8677/configmaps/e2e-watch-test-label-changed 66c430ce-de89-4d29-a3dc-e076fae1e2fa 4116037 0 2020-01-24 23:47:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:47:15.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8677" for this suite. • [SLOW TEST:10.184 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":29,"skipped":567,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:47:15.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 24 23:47:15.895: INFO: Waiting up to 5m0s for pod "pod-c25807e8-e2d8-466d-b553-cb4ac24d7ee3" in namespace "emptydir-6834" to be "success or failure" Jan 24 23:47:15.901: INFO: Pod "pod-c25807e8-e2d8-466d-b553-cb4ac24d7ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.82348ms Jan 24 23:47:17.924: INFO: Pod "pod-c25807e8-e2d8-466d-b553-cb4ac24d7ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029278462s Jan 24 23:47:19.936: INFO: Pod "pod-c25807e8-e2d8-466d-b553-cb4ac24d7ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040506644s Jan 24 23:47:21.942: INFO: Pod "pod-c25807e8-e2d8-466d-b553-cb4ac24d7ee3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.046557369s STEP: Saw pod success Jan 24 23:47:21.942: INFO: Pod "pod-c25807e8-e2d8-466d-b553-cb4ac24d7ee3" satisfied condition "success or failure" Jan 24 23:47:21.945: INFO: Trying to get logs from node jerma-node pod pod-c25807e8-e2d8-466d-b553-cb4ac24d7ee3 container test-container: STEP: delete the pod Jan 24 23:47:21.982: INFO: Waiting for pod pod-c25807e8-e2d8-466d-b553-cb4ac24d7ee3 to disappear Jan 24 23:47:21.996: INFO: Pod pod-c25807e8-e2d8-466d-b553-cb4ac24d7ee3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:47:21.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6834" for this suite. • [SLOW TEST:6.736 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":569,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:47:22.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 24 23:47:29.286: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:47:29.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1560" for this suite. 
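Note: the Container Runtime test above asserts that with TerminationMessagePolicy FallbackToLogsOnError a succeeding container leaves the termination message empty ("Expected: &{} to match"). A minimal sketch of such a container; the image and command are illustrative assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "docker.io/library/busybox:1.29", // illustrative
		Command: []string{"sh", "-c", "exit 0"},   // succeed without writing a message
		// With FallbackToLogsOnError the kubelet copies the tail of the
		// container log into the termination message only on failure; on
		// success the message stays empty, which is what the test asserts.
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Printf("container %s uses policy %s\n", c.Name, c.TerminationMessagePolicy)
}
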
• [SLOW TEST:7.373 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:47:29.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 24 23:47:38.625: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:47:39.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4795" for this suite. 
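Note: the ReplicaSet test above relies on label-selector adoption: a bare pod whose labels match the selector (and which has no controller yet) gets an ownerReference to the ReplicaSet, and relabeling the pod releases it again. A minimal sketch of the ReplicaSet side; the name/label pod-adoption-release is from the log, the httpd image is borrowed from elsewhere in this run:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption-release"}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			// The RS controller adopts any matching orphan pod by setting
			// itself as the pod's ownerReference; a pod whose label stops
			// matching is released (ownerReference removed).
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("replicaset %s selects %v\n", rs.Name, labels)
}
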
• [SLOW TEST:10.320 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":32,"skipped":612,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:47:39.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7153 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a new StatefulSet Jan 24 23:47:39.907: INFO: Found 0 stateful pods, waiting for 3 Jan 24 23:47:49.914: INFO: Found 1 stateful pods, waiting for 3 Jan 24 23:47:59.923: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 23:47:59.923: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 23:47:59.923: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 24 23:48:09.915: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 23:48:09.915: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 23:48:09.915: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 24 23:48:09.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7153 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 24 23:48:10.287: INFO: stderr: "I0124 23:48:10.077897 225 log.go:172] (0xc000a1e0b0) (0xc0009d2000) Create stream\nI0124 23:48:10.078074 225 log.go:172] (0xc000a1e0b0) (0xc0009d2000) Stream added, broadcasting: 1\nI0124 23:48:10.081896 225 log.go:172] (0xc000a1e0b0) Reply frame received for 1\nI0124 23:48:10.081942 225 log.go:172] (0xc000a1e0b0) (0xc0009d20a0) Create stream\nI0124 23:48:10.081963 225 log.go:172] (0xc000a1e0b0) (0xc0009d20a0) Stream added, broadcasting: 3\nI0124 23:48:10.083553 225 log.go:172] (0xc000a1e0b0) Reply frame received for 3\nI0124 23:48:10.083592 225 log.go:172] (0xc000a1e0b0) (0xc000984000) Create stream\nI0124 23:48:10.083607 225 log.go:172] (0xc000a1e0b0) (0xc000984000) Stream added, broadcasting: 5\nI0124 
23:48:10.087997 225 log.go:172] (0xc000a1e0b0) Reply frame received for 5\nI0124 23:48:10.161918 225 log.go:172] (0xc000a1e0b0) Data frame received for 5\nI0124 23:48:10.162050 225 log.go:172] (0xc000984000) (5) Data frame handling\nI0124 23:48:10.162069 225 log.go:172] (0xc000984000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0124 23:48:10.192601 225 log.go:172] (0xc000a1e0b0) Data frame received for 3\nI0124 23:48:10.192641 225 log.go:172] (0xc0009d20a0) (3) Data frame handling\nI0124 23:48:10.192662 225 log.go:172] (0xc0009d20a0) (3) Data frame sent\nI0124 23:48:10.280028 225 log.go:172] (0xc000a1e0b0) (0xc000984000) Stream removed, broadcasting: 5\nI0124 23:48:10.280158 225 log.go:172] (0xc000a1e0b0) Data frame received for 1\nI0124 23:48:10.280178 225 log.go:172] (0xc0009d2000) (1) Data frame handling\nI0124 23:48:10.280190 225 log.go:172] (0xc0009d2000) (1) Data frame sent\nI0124 23:48:10.280220 225 log.go:172] (0xc000a1e0b0) (0xc0009d2000) Stream removed, broadcasting: 1\nI0124 23:48:10.280935 225 log.go:172] (0xc000a1e0b0) (0xc0009d20a0) Stream removed, broadcasting: 3\nI0124 23:48:10.281643 225 log.go:172] (0xc000a1e0b0) Go away received\nI0124 23:48:10.281911 225 log.go:172] (0xc000a1e0b0) (0xc0009d2000) Stream removed, broadcasting: 1\nI0124 23:48:10.282044 225 log.go:172] (0xc000a1e0b0) (0xc0009d20a0) Stream removed, broadcasting: 3\nI0124 23:48:10.282108 225 log.go:172] (0xc000a1e0b0) (0xc000984000) Stream removed, broadcasting: 5\n" Jan 24 23:48:10.287: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 24 23:48:10.287: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 24 23:48:20.340: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 24 23:48:30.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7153 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 23:48:31.007: INFO: stderr: "I0124 23:48:30.795609 243 log.go:172] (0xc0007b8790) (0xc000743540) Create stream\nI0124 23:48:30.796164 243 log.go:172] (0xc0007b8790) (0xc000743540) Stream added, broadcasting: 1\nI0124 23:48:30.800045 243 log.go:172] (0xc0007b8790) Reply frame received for 1\nI0124 23:48:30.800220 243 log.go:172] (0xc0007b8790) (0xc00080a000) Create stream\nI0124 23:48:30.800237 243 log.go:172] (0xc0007b8790) (0xc00080a000) Stream added, broadcasting: 3\nI0124 23:48:30.801979 243 log.go:172] (0xc0007b8790) Reply frame received for 3\nI0124 23:48:30.802013 243 log.go:172] (0xc0007b8790) (0xc0007437c0) Create stream\nI0124 23:48:30.802026 243 log.go:172] (0xc0007b8790) (0xc0007437c0) Stream added, broadcasting: 5\nI0124 23:48:30.803269 243 log.go:172] (0xc0007b8790) Reply frame received for 5\nI0124 23:48:30.882090 243 log.go:172] (0xc0007b8790) Data frame received for 5\nI0124 23:48:30.882210 243 log.go:172] (0xc0007437c0) (5) Data frame handling\nI0124 23:48:30.882240 243 log.go:172] (0xc0007437c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0124 23:48:30.882894 243 log.go:172] (0xc0007b8790) Data frame received for 3\nI0124 23:48:30.882923 243 log.go:172] (0xc00080a000) (3) Data frame handling\nI0124 23:48:30.882950 243 log.go:172] (0xc00080a000) 
(3) Data frame sent\nI0124 23:48:30.995095 243 log.go:172] (0xc0007b8790) (0xc00080a000) Stream removed, broadcasting: 3\nI0124 23:48:30.995634 243 log.go:172] (0xc0007b8790) Data frame received for 1\nI0124 23:48:30.995831 243 log.go:172] (0xc0007b8790) (0xc0007437c0) Stream removed, broadcasting: 5\nI0124 23:48:30.996230 243 log.go:172] (0xc000743540) (1) Data frame handling\nI0124 23:48:30.996475 243 log.go:172] (0xc000743540) (1) Data frame sent\nI0124 23:48:30.996627 243 log.go:172] (0xc0007b8790) (0xc000743540) Stream removed, broadcasting: 1\nI0124 23:48:30.996792 243 log.go:172] (0xc0007b8790) Go away received\nI0124 23:48:30.998966 243 log.go:172] (0xc0007b8790) (0xc000743540) Stream removed, broadcasting: 1\nI0124 23:48:30.998998 243 log.go:172] (0xc0007b8790) (0xc00080a000) Stream removed, broadcasting: 3\nI0124 23:48:30.999009 243 log.go:172] (0xc0007b8790) (0xc0007437c0) Stream removed, broadcasting: 5\n" Jan 24 23:48:31.008: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 24 23:48:31.008: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 24 23:48:41.037: INFO: Waiting for StatefulSet statefulset-7153/ss2 to complete update Jan 24 23:48:41.037: INFO: Waiting for Pod statefulset-7153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 23:48:41.037: INFO: Waiting for Pod statefulset-7153/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 23:48:41.037: INFO: Waiting for Pod statefulset-7153/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 23:48:51.062: INFO: Waiting for StatefulSet statefulset-7153/ss2 to complete update Jan 24 23:48:51.062: INFO: Waiting for Pod statefulset-7153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 23:48:51.062: INFO: Waiting for Pod statefulset-7153/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 23:49:01.051: INFO: Waiting for StatefulSet statefulset-7153/ss2 to complete update Jan 24 23:49:01.051: INFO: Waiting for Pod statefulset-7153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 23:49:01.051: INFO: Waiting for Pod statefulset-7153/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 23:49:11.083: INFO: Waiting for StatefulSet statefulset-7153/ss2 to complete update Jan 24 23:49:11.083: INFO: Waiting for Pod statefulset-7153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 24 23:49:21.050: INFO: Waiting for StatefulSet statefulset-7153/ss2 to complete update Jan 24 23:49:21.050: INFO: Waiting for Pod statefulset-7153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Jan 24 23:49:31.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7153 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 24 23:49:31.561: INFO: stderr: "I0124 23:49:31.301193 261 log.go:172] (0xc0006de8f0) (0xc000703f40) Create stream\nI0124 23:49:31.301407 261 log.go:172] (0xc0006de8f0) (0xc000703f40) Stream added, broadcasting: 1\nI0124 23:49:31.319975 261 log.go:172] (0xc0006de8f0) Reply frame received for 1\nI0124 23:49:31.320276 261 log.go:172] (0xc0006de8f0) (0xc00068a820) Create stream\nI0124 23:49:31.320328 261 log.go:172] (0xc0006de8f0) (0xc00068a820) Stream added, broadcasting: 3\nI0124 
23:49:31.324923 261 log.go:172] (0xc0006de8f0) Reply frame received for 3\nI0124 23:49:31.324996 261 log.go:172] (0xc0006de8f0) (0xc0004df4a0) Create stream\nI0124 23:49:31.325010 261 log.go:172] (0xc0006de8f0) (0xc0004df4a0) Stream added, broadcasting: 5\nI0124 23:49:31.326107 261 log.go:172] (0xc0006de8f0) Reply frame received for 5\nI0124 23:49:31.403682 261 log.go:172] (0xc0006de8f0) Data frame received for 5\nI0124 23:49:31.403753 261 log.go:172] (0xc0004df4a0) (5) Data frame handling\nI0124 23:49:31.403780 261 log.go:172] (0xc0004df4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0124 23:49:31.435579 261 log.go:172] (0xc0006de8f0) Data frame received for 3\nI0124 23:49:31.435627 261 log.go:172] (0xc00068a820) (3) Data frame handling\nI0124 23:49:31.435648 261 log.go:172] (0xc00068a820) (3) Data frame sent\nI0124 23:49:31.545761 261 log.go:172] (0xc0006de8f0) Data frame received for 1\nI0124 23:49:31.545810 261 log.go:172] (0xc000703f40) (1) Data frame handling\nI0124 23:49:31.545840 261 log.go:172] (0xc000703f40) (1) Data frame sent\nI0124 23:49:31.549052 261 log.go:172] (0xc0006de8f0) (0xc000703f40) Stream removed, broadcasting: 1\nI0124 23:49:31.550979 261 log.go:172] (0xc0006de8f0) (0xc00068a820) Stream removed, broadcasting: 3\nI0124 23:49:31.551059 261 log.go:172] (0xc0006de8f0) (0xc0004df4a0) Stream removed, broadcasting: 5\nI0124 23:49:31.551087 261 log.go:172] (0xc0006de8f0) Go away received\nI0124 23:49:31.551142 261 log.go:172] (0xc0006de8f0) (0xc000703f40) Stream removed, broadcasting: 1\nI0124 23:49:31.551186 261 log.go:172] (0xc0006de8f0) (0xc00068a820) Stream removed, broadcasting: 3\nI0124 23:49:31.551195 261 log.go:172] (0xc0006de8f0) (0xc0004df4a0) Stream removed, broadcasting: 5\n" Jan 24 23:49:31.561: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 24 23:49:31.561: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 24 23:49:31.634: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 24 23:49:41.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7153 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 23:49:42.263: INFO: stderr: "I0124 23:49:41.975477 281 log.go:172] (0xc0000f54a0) (0xc000858000) Create stream\nI0124 23:49:41.975910 281 log.go:172] (0xc0000f54a0) (0xc000858000) Stream added, broadcasting: 1\nI0124 23:49:41.980653 281 log.go:172] (0xc0000f54a0) Reply frame received for 1\nI0124 23:49:41.980752 281 log.go:172] (0xc0000f54a0) (0xc0008580a0) Create stream\nI0124 23:49:41.980778 281 log.go:172] (0xc0000f54a0) (0xc0008580a0) Stream added, broadcasting: 3\nI0124 23:49:41.982428 281 log.go:172] (0xc0000f54a0) Reply frame received for 3\nI0124 23:49:41.982508 281 log.go:172] (0xc0000f54a0) (0xc0005a4000) Create stream\nI0124 23:49:41.982534 281 log.go:172] (0xc0000f54a0) (0xc0005a4000) Stream added, broadcasting: 5\nI0124 23:49:41.984908 281 log.go:172] (0xc0000f54a0) Reply frame received for 5\nI0124 23:49:42.085856 281 log.go:172] (0xc0000f54a0) Data frame received for 5\nI0124 23:49:42.086000 281 log.go:172] (0xc0005a4000) (5) Data frame handling\nI0124 23:49:42.086060 281 log.go:172] (0xc0005a4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0124 23:49:42.086980 281 log.go:172] (0xc0000f54a0) Data frame received for 3\nI0124 23:49:42.086992 281 
log.go:172] (0xc0008580a0) (3) Data frame handling\nI0124 23:49:42.087003 281 log.go:172] (0xc0008580a0) (3) Data frame sent\nI0124 23:49:42.222862 281 log.go:172] (0xc0000f54a0) Data frame received for 1\nI0124 23:49:42.223630 281 log.go:172] (0xc0000f54a0) (0xc0008580a0) Stream removed, broadcasting: 3\nI0124 23:49:42.223965 281 log.go:172] (0xc000858000) (1) Data frame handling\nI0124 23:49:42.224041 281 log.go:172] (0xc000858000) (1) Data frame sent\nI0124 23:49:42.224290 281 log.go:172] (0xc0000f54a0) (0xc0005a4000) Stream removed, broadcasting: 5\nI0124 23:49:42.224902 281 log.go:172] (0xc0000f54a0) (0xc000858000) Stream removed, broadcasting: 1\nI0124 23:49:42.225055 281 log.go:172] (0xc0000f54a0) Go away received\nI0124 23:49:42.229223 281 log.go:172] (0xc0000f54a0) (0xc000858000) Stream removed, broadcasting: 1\nI0124 23:49:42.229403 281 log.go:172] (0xc0000f54a0) (0xc0008580a0) Stream removed, broadcasting: 3\nI0124 23:49:42.229532 281 log.go:172] (0xc0000f54a0) (0xc0005a4000) Stream removed, broadcasting: 5\n" Jan 24 23:49:42.263: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 24 23:49:42.263: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 24 23:49:52.308: INFO: Waiting for StatefulSet statefulset-7153/ss2 to complete update Jan 24 23:49:52.308: INFO: Waiting for Pod statefulset-7153/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 23:49:52.308: INFO: Waiting for Pod statefulset-7153/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 23:49:52.308: INFO: Waiting for Pod statefulset-7153/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 23:50:02.324: INFO: Waiting for StatefulSet statefulset-7153/ss2 to complete update Jan 24 23:50:02.324: INFO: Waiting for Pod statefulset-7153/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 23:50:02.324: INFO: Waiting for Pod statefulset-7153/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 23:50:12.333: INFO: Waiting for StatefulSet statefulset-7153/ss2 to complete update Jan 24 23:50:12.334: INFO: Waiting for Pod statefulset-7153/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 23:50:12.334: INFO: Waiting for Pod statefulset-7153/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 23:50:22.329: INFO: Waiting for StatefulSet statefulset-7153/ss2 to complete update Jan 24 23:50:22.330: INFO: Waiting for Pod statefulset-7153/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Jan 24 23:50:32.316: INFO: Waiting for StatefulSet statefulset-7153/ss2 to complete update Jan 24 23:50:32.316: INFO: Waiting for Pod statefulset-7153/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 24 23:50:42.326: INFO: Deleting all statefulset in ns statefulset-7153 Jan 24 23:50:42.331: INFO: Scaling statefulset ss2 to 0 Jan 24 23:51:12.410: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 23:51:12.414: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:51:12.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "statefulset-7153" for this suite. • [SLOW TEST:212.771 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":33,"skipped":618,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:51:12.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: validating cluster-info Jan 24 23:51:12.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 24 23:51:12.759: INFO: stderr: "" Jan 24 23:51:12.759: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:51:12.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1008" for this suite. 
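Note: the long StatefulSet sequence earlier in this block (ss2: image httpd:2.4.38-alpine updated to 2.4.39-alpine, then rolled back, with controller revisions ss2-84f9d6bf57 and ss2-65c7964b94) exercises the RollingUpdate strategy, which replaces pods one at a time in reverse ordinal order. A minimal sketch of the object involved; the replica count, service name "test", and images are from the log, while the selector labels and container name are illustrative:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"app": "ss2"} // illustrative
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "test", // headless service created by the test
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate replaces ss2-2, then ss2-1, then ss2-0; each
			// template change produces a new controller revision.
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver", // illustrative
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	// Setting the template image to 2.4.39-alpine and back again is exactly
	// the update-then-rollback the test performs.
	fmt.Printf("statefulset %s: %d replicas of %s\n", ss.Name, *ss.Spec.Replicas, ss.Spec.Template.Spec.Containers[0].Image)
}
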
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":34,"skipped":631,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:51:12.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:51:32.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3928" for this suite. STEP: Destroying namespace "nsdeletetest-3077" for this suite. Jan 24 23:51:32.244: INFO: Namespace nsdeletetest-3077 was already deleted STEP: Destroying namespace "nsdeletetest-8566" for this suite. 
• [SLOW TEST:19.486 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":35,"skipped":647,"failed":0} [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:51:32.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:331 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a replication controller Jan 24 23:51:32.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8192' Jan 24 23:51:32.846: INFO: stderr: "" Jan 24 23:51:32.846: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 24 23:51:32.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8192' Jan 24 23:51:33.102: INFO: stderr: "" Jan 24 23:51:33.102: INFO: stdout: "update-demo-nautilus-8zxhv update-demo-nautilus-knzzn " Jan 24 23:51:33.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zxhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:51:33.287: INFO: stderr: "" Jan 24 23:51:33.287: INFO: stdout: "" Jan 24 23:51:33.287: INFO: update-demo-nautilus-8zxhv is created but not running Jan 24 23:51:38.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8192' Jan 24 23:51:40.479: INFO: stderr: "" Jan 24 23:51:40.479: INFO: stdout: "update-demo-nautilus-8zxhv update-demo-nautilus-knzzn " Jan 24 23:51:40.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zxhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:51:40.636: INFO: stderr: "" Jan 24 23:51:40.636: INFO: stdout: "" Jan 24 23:51:40.636: INFO: update-demo-nautilus-8zxhv is created but not running Jan 24 23:51:45.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8192' Jan 24 23:51:45.803: INFO: stderr: "" Jan 24 23:51:45.803: INFO: stdout: "update-demo-nautilus-8zxhv update-demo-nautilus-knzzn " Jan 24 23:51:45.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zxhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:51:45.997: INFO: stderr: "" Jan 24 23:51:45.997: INFO: stdout: "true" Jan 24 23:51:45.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8zxhv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:51:46.258: INFO: stderr: "" Jan 24 23:51:46.258: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 23:51:46.258: INFO: validating pod update-demo-nautilus-8zxhv Jan 24 23:51:46.271: INFO: got data: { "image": "nautilus.jpg" } Jan 24 23:51:46.271: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 23:51:46.271: INFO: update-demo-nautilus-8zxhv is verified up and running Jan 24 23:51:46.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knzzn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:51:46.384: INFO: stderr: "" Jan 24 23:51:46.384: INFO: stdout: "true" Jan 24 23:51:46.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knzzn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:51:46.513: INFO: stderr: "" Jan 24 23:51:46.513: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 23:51:46.513: INFO: validating pod update-demo-nautilus-knzzn Jan 24 23:51:46.521: INFO: got data: { "image": "nautilus.jpg" } Jan 24 23:51:46.522: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 23:51:46.522: INFO: update-demo-nautilus-knzzn is verified up and running STEP: scaling down the replication controller Jan 24 23:51:46.525: INFO: scanned /root for discovery docs: Jan 24 23:51:46.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8192' Jan 24 23:51:47.811: INFO: stderr: "" Jan 24 23:51:47.811: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 24 23:51:47.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8192' Jan 24 23:51:47.990: INFO: stderr: "" Jan 24 23:51:47.990: INFO: stdout: "update-demo-nautilus-8zxhv update-demo-nautilus-knzzn " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 24 23:51:52.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8192' Jan 24 23:51:53.165: INFO: stderr: "" Jan 24 23:51:53.165: INFO: stdout: "update-demo-nautilus-knzzn " Jan 24 23:51:53.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knzzn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:51:53.277: INFO: stderr: "" Jan 24 23:51:53.277: INFO: stdout: "true" Jan 24 23:51:53.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knzzn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:51:53.391: INFO: stderr: "" Jan 24 23:51:53.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 23:51:53.391: INFO: validating pod update-demo-nautilus-knzzn Jan 24 23:51:53.398: INFO: got data: { "image": "nautilus.jpg" } Jan 24 23:51:53.398: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 23:51:53.398: INFO: update-demo-nautilus-knzzn is verified up and running STEP: scaling up the replication controller Jan 24 23:51:53.400: INFO: scanned /root for discovery docs: Jan 24 23:51:53.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8192' Jan 24 23:51:54.552: INFO: stderr: "" Jan 24 23:51:54.552: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 24 23:51:54.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8192' Jan 24 23:51:54.699: INFO: stderr: "" Jan 24 23:51:54.699: INFO: stdout: "update-demo-nautilus-5qz6b update-demo-nautilus-knzzn " Jan 24 23:51:54.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qz6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:51:54.787: INFO: stderr: "" Jan 24 23:51:54.787: INFO: stdout: "" Jan 24 23:51:54.787: INFO: update-demo-nautilus-5qz6b is created but not running Jan 24 23:51:59.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8192' Jan 24 23:51:59.956: INFO: stderr: "" Jan 24 23:51:59.956: INFO: stdout: "update-demo-nautilus-5qz6b update-demo-nautilus-knzzn " Jan 24 23:51:59.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qz6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:52:00.061: INFO: stderr: "" Jan 24 23:52:00.062: INFO: stdout: "" Jan 24 23:52:00.062: INFO: update-demo-nautilus-5qz6b is created but not running Jan 24 23:52:05.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8192' Jan 24 23:52:05.207: INFO: stderr: "" Jan 24 23:52:05.207: INFO: stdout: "update-demo-nautilus-5qz6b update-demo-nautilus-knzzn " Jan 24 23:52:05.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qz6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:52:05.327: INFO: stderr: "" Jan 24 23:52:05.327: INFO: stdout: "true" Jan 24 23:52:05.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qz6b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:52:05.425: INFO: stderr: "" Jan 24 23:52:05.425: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 23:52:05.425: INFO: validating pod update-demo-nautilus-5qz6b Jan 24 23:52:05.430: INFO: got data: { "image": "nautilus.jpg" } Jan 24 23:52:05.430: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 23:52:05.430: INFO: update-demo-nautilus-5qz6b is verified up and running Jan 24 23:52:05.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knzzn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:52:05.529: INFO: stderr: "" Jan 24 23:52:05.529: INFO: stdout: "true" Jan 24 23:52:05.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-knzzn -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8192' Jan 24 23:52:05.618: INFO: stderr: "" Jan 24 23:52:05.618: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 24 23:52:05.618: INFO: validating pod update-demo-nautilus-knzzn Jan 24 23:52:05.623: INFO: got data: { "image": "nautilus.jpg" } Jan 24 23:52:05.623: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 24 23:52:05.623: INFO: update-demo-nautilus-knzzn is verified up and running STEP: using delete to clean up resources Jan 24 23:52:05.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8192' Jan 24 23:52:05.735: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 24 23:52:05.735: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 24 23:52:05.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8192' Jan 24 23:52:05.841: INFO: stderr: "No resources found in kubectl-8192 namespace.\n" Jan 24 23:52:05.841: INFO: stdout: "" Jan 24 23:52:05.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8192 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 24 23:52:05.937: INFO: stderr: "" Jan 24 23:52:05.937: INFO: stdout: "update-demo-nautilus-5qz6b\nupdate-demo-nautilus-knzzn\n" Jan 24 23:52:06.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8192' Jan 24 23:52:07.430: INFO: stderr: "No resources found in kubectl-8192 namespace.\n" Jan 24 23:52:07.430: INFO: stdout: "" Jan 24 23:52:07.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8192 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 24 23:52:07.723: INFO: stderr: "" Jan 24 23:52:07.723: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:52:07.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8192" for this suite. 
• [SLOW TEST:35.483 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":36,"skipped":647,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:52:07.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 24 23:52:07.922: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:52:10.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3617" for this suite. 
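------------------------------
The status sub-resource exercised above can also be reached without client-go. A minimal sketch over a local API proxy; mycrds.example.com is a placeholder CRD name and the patch body is illustrative only:

# Open an authenticated proxy to the API server (stop it afterwards with kill %1).
kubectl proxy --port=8001 &

# GET only the status sub-resource of the CRD object.
curl http://127.0.0.1:8001/apis/apiextensions.k8s.io/v1/customresourcedefinitions/mycrds.example.com/status

# PATCH through /status; changes to anything outside .status are ignored on this endpoint.
curl -X PATCH -H 'Content-Type: application/merge-patch+json' \
  --data '{"status":{"conditions":[]}}' \
  http://127.0.0.1:8001/apis/apiextensions.k8s.io/v1/customresourcedefinitions/mycrds.example.com/status
------------------------------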
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":37,"skipped":655,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:52:11.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Starting the proxy Jan 24 23:52:11.104: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix037740080/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:52:11.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7536" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":38,"skipped":688,"failed":0} S ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:52:11.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8584.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8584.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8584.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8584.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8584.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8584.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 24 23:52:23.425: INFO: DNS probes using dns-8584/dns-test-c9f7948a-b03c-475f-9a93-58a908470b29 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:52:23.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8584" for this suite. • [SLOW TEST:12.337 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":39,"skipped":689,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:52:23.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 24 23:52:24.969: INFO: Pod name wrapped-volume-race-b9a07853-fcf2-4b34-abdc-ccb271c56b68: Found 0 pods out of 5 Jan 24 23:52:29.979: INFO: Pod name wrapped-volume-race-b9a07853-fcf2-4b34-abdc-ccb271c56b68: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b9a07853-fcf2-4b34-abdc-ccb271c56b68 in namespace emptydir-wrapper-7852, will wait for the garbage collector to delete the pods Jan 24 23:53:00.089: INFO: Deleting ReplicationController 
wrapped-volume-race-b9a07853-fcf2-4b34-abdc-ccb271c56b68 took: 14.270393ms Jan 24 23:53:00.489: INFO: Terminating ReplicationController wrapped-volume-race-b9a07853-fcf2-4b34-abdc-ccb271c56b68 pods took: 400.928994ms STEP: Creating RC which spawns configmap-volume pods Jan 24 23:53:13.370: INFO: Pod name wrapped-volume-race-05209bf7-76f1-4c6d-8006-eddd47f4994a: Found 0 pods out of 5 Jan 24 23:53:18.399: INFO: Pod name wrapped-volume-race-05209bf7-76f1-4c6d-8006-eddd47f4994a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-05209bf7-76f1-4c6d-8006-eddd47f4994a in namespace emptydir-wrapper-7852, will wait for the garbage collector to delete the pods Jan 24 23:53:46.498: INFO: Deleting ReplicationController wrapped-volume-race-05209bf7-76f1-4c6d-8006-eddd47f4994a took: 12.922153ms Jan 24 23:53:46.899: INFO: Terminating ReplicationController wrapped-volume-race-05209bf7-76f1-4c6d-8006-eddd47f4994a pods took: 400.771663ms STEP: Creating RC which spawns configmap-volume pods Jan 24 23:54:02.545: INFO: Pod name wrapped-volume-race-fc7f0159-a5f2-43eb-9409-e6b494f68587: Found 0 pods out of 5 Jan 24 23:54:07.554: INFO: Pod name wrapped-volume-race-fc7f0159-a5f2-43eb-9409-e6b494f68587: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fc7f0159-a5f2-43eb-9409-e6b494f68587 in namespace emptydir-wrapper-7852, will wait for the garbage collector to delete the pods Jan 24 23:54:35.651: INFO: Deleting ReplicationController wrapped-volume-race-fc7f0159-a5f2-43eb-9409-e6b494f68587 took: 9.137612ms Jan 24 23:54:36.152: INFO: Terminating ReplicationController wrapped-volume-race-fc7f0159-a5f2-43eb-9409-e6b494f68587 pods took: 500.652575ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:54:53.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7852" for this suite. 
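------------------------------
The race this test guards against comes from mounting many ConfigMap volumes into one pod, where the internally wrapped emptyDir can race with volume setup. A minimal single-volume sketch of the pod shape involved; wrapped-volume-demo and racey-cm-0 are placeholder names, and the real test fans out to 50 ConfigMaps behind a replication controller:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/pause:3.1
    volumeMounts:
    - name: cm-0
      mountPath: /etc/cm-0
  volumes:
  - name: cm-0
    configMap:
      name: racey-cm-0   # must already exist, or the pod stays in ContainerCreating
EOF
------------------------------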
• [SLOW TEST:149.915 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":40,"skipped":700,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:54:53.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-5944a5bc-d463-4baa-91d3-1b5efee913f2 STEP: Creating a pod to test consume configMaps Jan 24 23:54:53.917: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0" in namespace "projected-3162" to be "success or failure" Jan 24 23:54:53.937: INFO: Pod "pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.357782ms Jan 24 23:54:55.946: INFO: Pod "pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0295627s Jan 24 23:54:57.956: INFO: Pod "pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038683487s Jan 24 23:55:00.004: INFO: Pod "pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087060624s Jan 24 23:55:02.011: INFO: Pod "pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09459218s Jan 24 23:55:04.017: INFO: Pod "pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.100629294s Jan 24 23:55:06.023: INFO: Pod "pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.105987607s Jan 24 23:55:08.029: INFO: Pod "pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.112614096s STEP: Saw pod success Jan 24 23:55:08.029: INFO: Pod "pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0" satisfied condition "success or failure" Jan 24 23:55:08.032: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0 container projected-configmap-volume-test: STEP: delete the pod Jan 24 23:55:08.098: INFO: Waiting for pod pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0 to disappear Jan 24 23:55:08.105: INFO: Pod pod-projected-configmaps-fb475d6e-928a-4cb6-806c-d0c3d1b9c7a0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:55:08.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3162" for this suite. • [SLOW TEST:14.664 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":713,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:55:08.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-ce709046-1f56-4164-b4ae-ccd816b38313 in namespace container-probe-973 Jan 24 23:55:14.305: INFO: Started pod busybox-ce709046-1f56-4164-b4ae-ccd816b38313 in namespace container-probe-973 STEP: checking the pod's current state and verifying that restartCount is present Jan 24 23:55:14.312: INFO: Initial restart count of pod busybox-ce709046-1f56-4164-b4ae-ccd816b38313 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:59:15.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-973" for this suite. 
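------------------------------
The probe above passes as long as the file it cats keeps existing, so the pod's restartCount stays at 0 across the roughly four-minute observation window in the log. A minimal sketch of the pod shape involved; the name, image tag and timings are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    # Create the probe target up front, then just stay alive.
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

# Expect this to keep printing 0 while /tmp/health exists.
kubectl get pod liveness-exec-demo -o template \
  --template='{{(index .status.containerStatuses 0).restartCount}}'
------------------------------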
• [SLOW TEST:247.824 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":725,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:59:15.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Jan 24 23:59:24.674: INFO: Successfully updated pod "labelsupdate7aef3a54-e692-4558-b447-e797fe8ac1d2" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:59:26.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4877" for this suite. 
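------------------------------
The label update above lands because a downward-API volume re-renders metadata.labels while the pod is running; no container restart is involved. A minimal sketch of such a pod; names, image and label values are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-volume-demo
  labels:
    stage: before
spec:
  containers:
  - name: client
    image: busybox:1.29
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF

# The mounted file eventually reflects the new label without a restart.
kubectl label pod labels-volume-demo stage=after --overwrite
------------------------------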
• [SLOW TEST:10.834 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":732,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:59:26.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating Agnhost RC Jan 24 23:59:26.893: INFO: namespace kubectl-9889 Jan 24 23:59:26.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9889' Jan 24 23:59:29.222: INFO: stderr: "" Jan 24 23:59:29.222: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jan 24 23:59:30.230: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:30.230: INFO: Found 0 / 1 Jan 24 23:59:31.230: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:31.230: INFO: Found 0 / 1 Jan 24 23:59:32.254: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:32.254: INFO: Found 0 / 1 Jan 24 23:59:33.229: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:33.229: INFO: Found 0 / 1 Jan 24 23:59:34.312: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:34.312: INFO: Found 0 / 1 Jan 24 23:59:35.230: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:35.230: INFO: Found 0 / 1 Jan 24 23:59:36.229: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:36.229: INFO: Found 0 / 1 Jan 24 23:59:37.227: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:37.227: INFO: Found 0 / 1 Jan 24 23:59:38.233: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:38.233: INFO: Found 0 / 1 Jan 24 23:59:39.233: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:39.234: INFO: Found 1 / 1 Jan 24 23:59:39.234: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 24 23:59:39.240: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 23:59:39.240: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
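------------------------------
The readiness poll just completed (Selector matched / Found n of m, repeated once per second) is the suite's own loop; the same wait can be expressed in a single kubectl command. A sketch, assuming the namespace and selector from the log:

# Block until every pod behind the selector reports the Ready condition.
kubectl --namespace=kubectl-9889 wait --for=condition=Ready pod -l app=agnhost --timeout=5m
------------------------------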
Jan 24 23:59:39.240: INFO: wait on agnhost-master startup in kubectl-9889 Jan 24 23:59:39.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-zgkzg agnhost-master --namespace=kubectl-9889' Jan 24 23:59:39.453: INFO: stderr: "" Jan 24 23:59:39.453: INFO: stdout: "Paused\n" STEP: exposing RC Jan 24 23:59:39.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9889' Jan 24 23:59:39.663: INFO: stderr: "" Jan 24 23:59:39.663: INFO: stdout: "service/rm2 exposed\n" Jan 24 23:59:39.667: INFO: Service rm2 in namespace kubectl-9889 found. STEP: exposing service Jan 24 23:59:41.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9889' Jan 24 23:59:41.907: INFO: stderr: "" Jan 24 23:59:41.907: INFO: stdout: "service/rm3 exposed\n" Jan 24 23:59:41.923: INFO: Service rm3 in namespace kubectl-9889 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 24 23:59:43.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9889" for this suite. • [SLOW TEST:17.176 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1296 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":44,"skipped":742,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 24 23:59:43.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-7899 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 24 23:59:44.074: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 25 00:00:20.301: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7899 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 00:00:20.301: INFO: >>> kubeConfig: /root/.kube/config I0125 00:00:20.371102 9 log.go:172] (0xc000af4000) (0xc001c4ab40) Create stream I0125 00:00:20.371168 9 log.go:172] (0xc000af4000) (0xc001c4ab40) 
Stream added, broadcasting: 1 I0125 00:00:20.376252 9 log.go:172] (0xc000af4000) Reply frame received for 1 I0125 00:00:20.376304 9 log.go:172] (0xc000af4000) (0xc002af4140) Create stream I0125 00:00:20.376324 9 log.go:172] (0xc000af4000) (0xc002af4140) Stream added, broadcasting: 3 I0125 00:00:20.378002 9 log.go:172] (0xc000af4000) Reply frame received for 3 I0125 00:00:20.378031 9 log.go:172] (0xc000af4000) (0xc002af4280) Create stream I0125 00:00:20.378042 9 log.go:172] (0xc000af4000) (0xc002af4280) Stream added, broadcasting: 5 I0125 00:00:20.379876 9 log.go:172] (0xc000af4000) Reply frame received for 5 I0125 00:00:20.509898 9 log.go:172] (0xc000af4000) Data frame received for 3 I0125 00:00:20.510104 9 log.go:172] (0xc002af4140) (3) Data frame handling I0125 00:00:20.510169 9 log.go:172] (0xc002af4140) (3) Data frame sent I0125 00:00:20.638105 9 log.go:172] (0xc000af4000) Data frame received for 1 I0125 00:00:20.638441 9 log.go:172] (0xc001c4ab40) (1) Data frame handling I0125 00:00:20.638527 9 log.go:172] (0xc001c4ab40) (1) Data frame sent I0125 00:00:20.641179 9 log.go:172] (0xc000af4000) (0xc002af4140) Stream removed, broadcasting: 3 I0125 00:00:20.641379 9 log.go:172] (0xc000af4000) (0xc002af4280) Stream removed, broadcasting: 5 I0125 00:00:20.641457 9 log.go:172] (0xc000af4000) (0xc001c4ab40) Stream removed, broadcasting: 1 I0125 00:00:20.641535 9 log.go:172] (0xc000af4000) Go away received I0125 00:00:20.641933 9 log.go:172] (0xc000af4000) (0xc001c4ab40) Stream removed, broadcasting: 1 I0125 00:00:20.641972 9 log.go:172] (0xc000af4000) (0xc002af4140) Stream removed, broadcasting: 3 I0125 00:00:20.641986 9 log.go:172] (0xc000af4000) (0xc002af4280) Stream removed, broadcasting: 5 Jan 25 00:00:20.642: INFO: Found all expected endpoints: [netserver-0] Jan 25 00:00:20.647: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7899 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 00:00:20.648: INFO: >>> kubeConfig: /root/.kube/config I0125 00:00:20.689317 9 log.go:172] (0xc000af4790) (0xc001c4afa0) Create stream I0125 00:00:20.689372 9 log.go:172] (0xc000af4790) (0xc001c4afa0) Stream added, broadcasting: 1 I0125 00:00:20.692118 9 log.go:172] (0xc000af4790) Reply frame received for 1 I0125 00:00:20.692152 9 log.go:172] (0xc000af4790) (0xc001c4b040) Create stream I0125 00:00:20.692163 9 log.go:172] (0xc000af4790) (0xc001c4b040) Stream added, broadcasting: 3 I0125 00:00:20.693461 9 log.go:172] (0xc000af4790) Reply frame received for 3 I0125 00:00:20.693487 9 log.go:172] (0xc000af4790) (0xc002af48c0) Create stream I0125 00:00:20.693495 9 log.go:172] (0xc000af4790) (0xc002af48c0) Stream added, broadcasting: 5 I0125 00:00:20.695433 9 log.go:172] (0xc000af4790) Reply frame received for 5 I0125 00:00:20.762779 9 log.go:172] (0xc000af4790) Data frame received for 3 I0125 00:00:20.762847 9 log.go:172] (0xc001c4b040) (3) Data frame handling I0125 00:00:20.762863 9 log.go:172] (0xc001c4b040) (3) Data frame sent I0125 00:00:20.850766 9 log.go:172] (0xc000af4790) Data frame received for 1 I0125 00:00:20.850824 9 log.go:172] (0xc000af4790) (0xc001c4b040) Stream removed, broadcasting: 3 I0125 00:00:20.850855 9 log.go:172] (0xc001c4afa0) (1) Data frame handling I0125 00:00:20.850868 9 log.go:172] (0xc001c4afa0) (1) Data frame sent I0125 00:00:20.850894 9 log.go:172] (0xc000af4790) (0xc002af48c0) Stream 
removed, broadcasting: 5 I0125 00:00:20.850909 9 log.go:172] (0xc000af4790) (0xc001c4afa0) Stream removed, broadcasting: 1 I0125 00:00:20.850922 9 log.go:172] (0xc000af4790) Go away received I0125 00:00:20.851389 9 log.go:172] (0xc000af4790) (0xc001c4afa0) Stream removed, broadcasting: 1 I0125 00:00:20.851401 9 log.go:172] (0xc000af4790) (0xc001c4b040) Stream removed, broadcasting: 3 I0125 00:00:20.851405 9 log.go:172] (0xc000af4790) (0xc002af48c0) Stream removed, broadcasting: 5 Jan 25 00:00:20.851: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:00:20.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7899" for this suite. • [SLOW TEST:36.951 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":749,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:00:20.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 25 00:00:21.046: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50" in namespace "projected-2664" to be "success or failure" Jan 25 00:00:21.064: INFO: Pod "downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50": Phase="Pending", Reason="", readiness=false. Elapsed: 17.465822ms Jan 25 00:00:23.070: INFO: Pod "downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024089508s Jan 25 00:00:25.084: INFO: Pod "downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037672515s Jan 25 00:00:27.110: INFO: Pod "downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063541622s Jan 25 00:00:29.116: INFO: Pod "downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.06928081s Jan 25 00:00:31.123: INFO: Pod "downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076295834s Jan 25 00:00:33.128: INFO: Pod "downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50": Phase="Pending", Reason="", readiness=false. Elapsed: 12.081689383s Jan 25 00:00:35.135: INFO: Pod "downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.088451887s STEP: Saw pod success Jan 25 00:00:35.135: INFO: Pod "downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50" satisfied condition "success or failure" Jan 25 00:00:35.138: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50 container client-container: STEP: delete the pod Jan 25 00:00:35.172: INFO: Waiting for pod downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50 to disappear Jan 25 00:00:35.181: INFO: Pod downwardapi-volume-0873ba96-a209-46b1-b96a-17f43ce0be50 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:00:35.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2664" for this suite. • [SLOW TEST:14.281 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":784,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:00:35.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating secret secrets-7333/secret-test-23b43b32-c1c5-4eb4-a820-a3f31b9352d5 STEP: Creating a pod to test consume secrets Jan 25 00:00:35.363: INFO: Waiting up to 5m0s for pod "pod-configmaps-23f7fcb9-61ae-477b-860d-c6e8cfa39a1b" in namespace "secrets-7333" to be "success or failure" Jan 25 00:00:35.466: INFO: Pod "pod-configmaps-23f7fcb9-61ae-477b-860d-c6e8cfa39a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 103.076639ms Jan 25 00:00:37.472: INFO: Pod "pod-configmaps-23f7fcb9-61ae-477b-860d-c6e8cfa39a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108586354s Jan 25 00:00:39.482: INFO: Pod "pod-configmaps-23f7fcb9-61ae-477b-860d-c6e8cfa39a1b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.118741879s Jan 25 00:00:41.495: INFO: Pod "pod-configmaps-23f7fcb9-61ae-477b-860d-c6e8cfa39a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131576252s Jan 25 00:00:43.506: INFO: Pod "pod-configmaps-23f7fcb9-61ae-477b-860d-c6e8cfa39a1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142733951s STEP: Saw pod success Jan 25 00:00:43.506: INFO: Pod "pod-configmaps-23f7fcb9-61ae-477b-860d-c6e8cfa39a1b" satisfied condition "success or failure" Jan 25 00:00:43.514: INFO: Trying to get logs from node jerma-node pod pod-configmaps-23f7fcb9-61ae-477b-860d-c6e8cfa39a1b container env-test: STEP: delete the pod Jan 25 00:00:43.556: INFO: Waiting for pod pod-configmaps-23f7fcb9-61ae-477b-860d-c6e8cfa39a1b to disappear Jan 25 00:00:43.561: INFO: Pod pod-configmaps-23f7fcb9-61ae-477b-860d-c6e8cfa39a1b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:00:43.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7333" for this suite. • [SLOW TEST:8.383 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":810,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:00:43.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-fe82e337-c4ab-4058-ab6a-f410676426ce STEP: Creating secret with name s-test-opt-upd-56fae01d-e653-4c00-befc-cd8a686e85f6 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-fe82e337-c4ab-4058-ab6a-f410676426ce STEP: Updating secret s-test-opt-upd-56fae01d-e653-4c00-befc-cd8a686e85f6 STEP: Creating secret with name s-test-opt-create-90953e1b-78b8-4e1e-b338-cbf6e73edb72 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:00:56.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9720" for this suite. 
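------------------------------
All three secret mutations above (delete, update, create) propagate into the projected volume without restarting the pod; the optional flag is what lets the pod start and keep running while a referenced secret is absent. A minimal sketch of the volume shape; names and image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: client
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: maybe-missing-secret
          optional: true   # pod starts even if the secret does not exist yet
EOF
------------------------------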
• [SLOW TEST:12.456 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":848,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:00:56.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:01:32.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7633" for this suite. • [SLOW TEST:36.177 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":49,"skipped":871,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:01:32.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-8831/configmap-test-d9c62965-0677-41ec-a90e-4ca462032fa1 STEP: Creating a pod to test consume configMaps Jan 25 00:01:32.345: INFO: Waiting up to 5m0s for pod "pod-configmaps-92065c47-9bf8-4d57-b130-47eed7f318de" in namespace "configmap-8831" to be "success or failure" Jan 25 00:01:32.381: INFO: Pod "pod-configmaps-92065c47-9bf8-4d57-b130-47eed7f318de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.156205ms Jan 25 00:01:34.391: INFO: Pod "pod-configmaps-92065c47-9bf8-4d57-b130-47eed7f318de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045582456s Jan 25 00:01:36.397: INFO: Pod "pod-configmaps-92065c47-9bf8-4d57-b130-47eed7f318de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051670057s Jan 25 00:01:38.401: INFO: Pod "pod-configmaps-92065c47-9bf8-4d57-b130-47eed7f318de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055669193s Jan 25 00:01:40.409: INFO: Pod "pod-configmaps-92065c47-9bf8-4d57-b130-47eed7f318de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063852056s STEP: Saw pod success Jan 25 00:01:40.409: INFO: Pod "pod-configmaps-92065c47-9bf8-4d57-b130-47eed7f318de" satisfied condition "success or failure" Jan 25 00:01:40.413: INFO: Trying to get logs from node jerma-node pod pod-configmaps-92065c47-9bf8-4d57-b130-47eed7f318de container env-test: STEP: delete the pod Jan 25 00:01:40.524: INFO: Waiting for pod pod-configmaps-92065c47-9bf8-4d57-b130-47eed7f318de to disappear Jan 25 00:01:40.531: INFO: Pod pod-configmaps-92065c47-9bf8-4d57-b130-47eed7f318de no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:01:40.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8831" for this suite. • [SLOW TEST:8.335 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":936,"failed":0} S ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:01:40.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:01:40.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9782" for this suite. 
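------------------------------
Table transformation is negotiated through the Accept header, which is also how the 406 above is produced: a backend that cannot serve Table metadata rejects the request. A sketch against the core API over a local proxy (which does implement Table, so it answers 200):

# Open an authenticated proxy to the API server (stop it afterwards with kill %1).
kubectl proxy --port=8001 &

# Ask for a server-side Table rendering of pods; a non-conforming backend answers 406 Not Acceptable.
curl -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
  http://127.0.0.1:8001/api/v1/namespaces/default/pods
------------------------------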
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":51,"skipped":937,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:01:40.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting the proxy server Jan 25 00:01:40.895: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:01:41.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9612" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":52,"skipped":940,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:01:41.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 25 00:01:41.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1949fc3a-fe83-4505-8355-feec8b2d6540" in namespace "downward-api-3156" to be "success or failure" Jan 25 00:01:41.141: INFO: Pod "downwardapi-volume-1949fc3a-fe83-4505-8355-feec8b2d6540": Phase="Pending", Reason="", readiness=false. Elapsed: 5.248334ms Jan 25 00:01:43.149: INFO: Pod "downwardapi-volume-1949fc3a-fe83-4505-8355-feec8b2d6540": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013000943s Jan 25 00:01:45.187: INFO: Pod "downwardapi-volume-1949fc3a-fe83-4505-8355-feec8b2d6540": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.051601045s Jan 25 00:01:47.195: INFO: Pod "downwardapi-volume-1949fc3a-fe83-4505-8355-feec8b2d6540": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059189475s Jan 25 00:01:49.252: INFO: Pod "downwardapi-volume-1949fc3a-fe83-4505-8355-feec8b2d6540": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116320165s STEP: Saw pod success Jan 25 00:01:49.252: INFO: Pod "downwardapi-volume-1949fc3a-fe83-4505-8355-feec8b2d6540" satisfied condition "success or failure" Jan 25 00:01:49.256: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1949fc3a-fe83-4505-8355-feec8b2d6540 container client-container: STEP: delete the pod Jan 25 00:01:49.416: INFO: Waiting for pod downwardapi-volume-1949fc3a-fe83-4505-8355-feec8b2d6540 to disappear Jan 25 00:01:49.420: INFO: Pod downwardapi-volume-1949fc3a-fe83-4505-8355-feec8b2d6540 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:01:49.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3156" for this suite. • [SLOW TEST:8.379 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":951,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:01:49.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 25 00:01:49.584: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 25 00:01:49.602: INFO: Waiting for terminating namespaces to be deleted... 
Jan 25 00:01:49.605: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 25 00:01:49.613: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 25 00:01:49.613: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 00:01:49.613: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 25 00:01:49.613: INFO: Container weave ready: true, restart count 1 Jan 25 00:01:49.613: INFO: Container weave-npc ready: true, restart count 0 Jan 25 00:01:49.613: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 25 00:01:49.642: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 25 00:01:49.642: INFO: Container kube-apiserver ready: true, restart count 1 Jan 25 00:01:49.642: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 25 00:01:49.642: INFO: Container etcd ready: true, restart count 1 Jan 25 00:01:49.642: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 25 00:01:49.642: INFO: Container coredns ready: true, restart count 0 Jan 25 00:01:49.642: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 25 00:01:49.642: INFO: Container coredns ready: true, restart count 0 Jan 25 00:01:49.642: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 25 00:01:49.642: INFO: Container weave ready: true, restart count 0 Jan 25 00:01:49.642: INFO: Container weave-npc ready: true, restart count 0 Jan 25 00:01:49.643: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 25 00:01:49.643: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 25 00:01:49.643: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 25 00:01:49.643: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 00:01:49.643: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 25 00:01:49.643: INFO: Container kube-scheduler ready: true, restart count 3 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Jan 25 00:01:49.714: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 25 00:01:49.714: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 25 00:01:49.714: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Jan 25 00:01:49.714: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub Jan 25 00:01:49.714: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub Jan 25 00:01:49.714: INFO: Pod
kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Jan 25 00:01:49.714: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node Jan 25 00:01:49.714: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Jan 25 00:01:49.714: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node Jan 25 00:01:49.714: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub STEP: Starting Pods to consume most of the cluster CPU. Jan 25 00:01:49.714: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Jan 25 00:01:49.720: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-417c7a1e-f459-4b0a-81a3-eebba0c4f50a.15ecf83ce6355365], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1792/filler-pod-417c7a1e-f459-4b0a-81a3-eebba0c4f50a to jerma-node] STEP: Considering event: Type = [Normal], Name = [filler-pod-417c7a1e-f459-4b0a-81a3-eebba0c4f50a.15ecf83de5889f03], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-417c7a1e-f459-4b0a-81a3-eebba0c4f50a.15ecf83eac6471bf], Reason = [Created], Message = [Created container filler-pod-417c7a1e-f459-4b0a-81a3-eebba0c4f50a] STEP: Considering event: Type = [Normal], Name = [filler-pod-417c7a1e-f459-4b0a-81a3-eebba0c4f50a.15ecf83ed6edc2a8], Reason = [Started], Message = [Started container filler-pod-417c7a1e-f459-4b0a-81a3-eebba0c4f50a] STEP: Considering event: Type = [Normal], Name = [filler-pod-e5d72160-16c9-4919-ad7d-6181ec63e72b.15ecf83ce63127c1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1792/filler-pod-e5d72160-16c9-4919-ad7d-6181ec63e72b to jerma-server-mvvl6gufaqub] STEP: Considering event: Type = [Normal], Name = [filler-pod-e5d72160-16c9-4919-ad7d-6181ec63e72b.15ecf83e0bfbe100], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e5d72160-16c9-4919-ad7d-6181ec63e72b.15ecf83f05a64a57], Reason = [Created], Message = [Created container filler-pod-e5d72160-16c9-4919-ad7d-6181ec63e72b] STEP: Considering event: Type = [Normal], Name = [filler-pod-e5d72160-16c9-4919-ad7d-6181ec63e72b.15ecf83f1f6f2f6c], Reason = [Started], Message = [Started container filler-pod-e5d72160-16c9-4919-ad7d-6181ec63e72b] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ecf83f3c49e973], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-mvvl6gufaqub STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:02:00.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1792" for this suite. 
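What the predicate spec verifies: the scheduler sums the CPU requests already bound to each node (logged above), the filler pods absorb almost all of the remainder, and one more pod requesting CPU must then fail with Insufficient cpu. The same failure can be provoked directly with a single unsatisfiable request; a sketch (name and request size hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: overcommit-probe
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "1000"        # deliberately beyond any node's allocatable CPU
EOF
kubectl get events --field-selector reason=FailedScheduling
# expect a message like: 0/2 nodes are available: 2 Insufficient cpu.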
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 • [SLOW TEST:11.539 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":54,"skipped":958,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:02:00.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-7475 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 25 00:02:01.152: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 25 00:02:37.427: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.44.0.2&port=8081&tries=1'] Namespace:pod-network-test-7475 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 00:02:37.427: INFO: >>> kubeConfig: /root/.kube/config I0125 00:02:37.484949 9 log.go:172] (0xc0028d2840) (0xc0023b2a00) Create stream I0125 00:02:37.485020 9 log.go:172] (0xc0028d2840) (0xc0023b2a00) Stream added, broadcasting: 1 I0125 00:02:37.489596 9 log.go:172] (0xc0028d2840) Reply frame received for 1 I0125 00:02:37.489637 9 log.go:172] (0xc0028d2840) (0xc0023b2b40) Create stream I0125 00:02:37.489650 9 log.go:172] (0xc0028d2840) (0xc0023b2b40) Stream added, broadcasting: 3 I0125 00:02:37.491939 9 log.go:172] (0xc0028d2840) Reply frame received for 3 I0125 00:02:37.491979 9 log.go:172] (0xc0028d2840) (0xc00295e500) Create stream I0125 00:02:37.491996 9 log.go:172] (0xc0028d2840) (0xc00295e500) Stream added, broadcasting: 5 I0125 00:02:37.493972 9 log.go:172] (0xc0028d2840) Reply frame received for 5 I0125 00:02:37.590137 9 log.go:172] (0xc0028d2840) Data frame received for 3 I0125 00:02:37.590348 9 log.go:172] (0xc0023b2b40) (3) Data frame handling I0125 00:02:37.590366 9 log.go:172] (0xc0023b2b40) (3) Data frame sent I0125 00:02:37.672233 9 log.go:172] (0xc0028d2840) Data frame received for 1 I0125 00:02:37.672323 9 log.go:172] (0xc0023b2a00) (1) Data frame handling I0125 00:02:37.672350 9 log.go:172] (0xc0023b2a00) (1) Data frame sent I0125 00:02:37.673023 9 log.go:172] (0xc0028d2840) (0xc0023b2b40) Stream removed, broadcasting: 3 I0125 00:02:37.673264 9 log.go:172] (0xc0028d2840) 
(0xc00295e500) Stream removed, broadcasting: 5 I0125 00:02:37.673329 9 log.go:172] (0xc0028d2840) (0xc0023b2a00) Stream removed, broadcasting: 1 I0125 00:02:37.673605 9 log.go:172] (0xc0028d2840) Go away received I0125 00:02:37.673678 9 log.go:172] (0xc0028d2840) (0xc0023b2a00) Stream removed, broadcasting: 1 I0125 00:02:37.673713 9 log.go:172] (0xc0028d2840) (0xc0023b2b40) Stream removed, broadcasting: 3 I0125 00:02:37.673737 9 log.go:172] (0xc0028d2840) (0xc00295e500) Stream removed, broadcasting: 5 Jan 25 00:02:37.673: INFO: Waiting for responses: map[] Jan 25 00:02:37.679: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.32.0.5&port=8081&tries=1'] Namespace:pod-network-test-7475 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 25 00:02:37.679: INFO: >>> kubeConfig: /root/.kube/config I0125 00:02:37.735170 9 log.go:172] (0xc0026d5ce0) (0xc00295ec80) Create stream I0125 00:02:37.735239 9 log.go:172] (0xc0026d5ce0) (0xc00295ec80) Stream added, broadcasting: 1 I0125 00:02:37.739674 9 log.go:172] (0xc0026d5ce0) Reply frame received for 1 I0125 00:02:37.739696 9 log.go:172] (0xc0026d5ce0) (0xc002af4780) Create stream I0125 00:02:37.739705 9 log.go:172] (0xc0026d5ce0) (0xc002af4780) Stream added, broadcasting: 3 I0125 00:02:37.740648 9 log.go:172] (0xc0026d5ce0) Reply frame received for 3 I0125 00:02:37.740669 9 log.go:172] (0xc0026d5ce0) (0xc0020841e0) Create stream I0125 00:02:37.740676 9 log.go:172] (0xc0026d5ce0) (0xc0020841e0) Stream added, broadcasting: 5 I0125 00:02:37.741700 9 log.go:172] (0xc0026d5ce0) Reply frame received for 5 I0125 00:02:37.817588 9 log.go:172] (0xc0026d5ce0) Data frame received for 3 I0125 00:02:37.817700 9 log.go:172] (0xc002af4780) (3) Data frame handling I0125 00:02:37.817725 9 log.go:172] (0xc002af4780) (3) Data frame sent I0125 00:02:37.904065 9 log.go:172] (0xc0026d5ce0) Data frame received for 1 I0125 00:02:37.904259 9 log.go:172] (0xc00295ec80) (1) Data frame handling I0125 00:02:37.904358 9 log.go:172] (0xc00295ec80) (1) Data frame sent I0125 00:02:37.905660 9 log.go:172] (0xc0026d5ce0) (0xc00295ec80) Stream removed, broadcasting: 1 I0125 00:02:37.906351 9 log.go:172] (0xc0026d5ce0) (0xc002af4780) Stream removed, broadcasting: 3 I0125 00:02:37.906922 9 log.go:172] (0xc0026d5ce0) (0xc0020841e0) Stream removed, broadcasting: 5 I0125 00:02:37.906989 9 log.go:172] (0xc0026d5ce0) (0xc00295ec80) Stream removed, broadcasting: 1 I0125 00:02:37.907025 9 log.go:172] (0xc0026d5ce0) (0xc002af4780) Stream removed, broadcasting: 3 I0125 00:02:37.907035 9 log.go:172] (0xc0026d5ce0) (0xc0020841e0) Stream removed, broadcasting: 5 I0125 00:02:37.907104 9 log.go:172] (0xc0026d5ce0) Go away received Jan 25 00:02:37.907: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:02:37.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7475" for this suite. 
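The UDP check above is driven through a single HTTP endpoint: the test container listens on :8080, and its /dial handler is told to send a hostname query over UDP to each peer pod on :8081; an empty "Waiting for responses: map[]" means every expected peer answered. Replayed by hand against the same pods (IPs copied from the log; the response shape is the agnhost convention, stated as an assumption):

kubectl exec -n pod-network-test-7475 test-container-pod -- \
  curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.32.0.5&port=8081&tries=1'
# a healthy overlay returns roughly {"responses":["<peer pod hostname>"]}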
• [SLOW TEST:36.987 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":967,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:02:37.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 00:02:38.088: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-89ca6da7-d373-4e4a-8d3b-d0ee7388fd66" in namespace "security-context-test-8240" to be "success or failure" Jan 25 00:02:38.094: INFO: Pod "busybox-readonly-false-89ca6da7-d373-4e4a-8d3b-d0ee7388fd66": Phase="Pending", Reason="", readiness=false. Elapsed: 5.689639ms Jan 25 00:02:40.102: INFO: Pod "busybox-readonly-false-89ca6da7-d373-4e4a-8d3b-d0ee7388fd66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013602499s Jan 25 00:02:42.128: INFO: Pod "busybox-readonly-false-89ca6da7-d373-4e4a-8d3b-d0ee7388fd66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039440741s Jan 25 00:02:45.305: INFO: Pod "busybox-readonly-false-89ca6da7-d373-4e4a-8d3b-d0ee7388fd66": Phase="Pending", Reason="", readiness=false. Elapsed: 7.216869904s Jan 25 00:02:47.396: INFO: Pod "busybox-readonly-false-89ca6da7-d373-4e4a-8d3b-d0ee7388fd66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.307965383s Jan 25 00:02:47.397: INFO: Pod "busybox-readonly-false-89ca6da7-d373-4e4a-8d3b-d0ee7388fd66" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:02:47.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8240" for this suite. 
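The security-context spec needs only a pod that writes to its root filesystem while readOnlyRootFilesystem is explicitly false; reaching phase Succeeded proves the write was permitted. A minimal sketch (names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: rootfs-writable-probe
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox:1.29
    command: ["sh", "-c", "echo ok > /probe && cat /probe"]
    securityContext:
      readOnlyRootFilesystem: false
EOF
kubectl get pod rootfs-writable-probe -o jsonpath='{.status.phase}'
# expect: Succeeded (with readOnlyRootFilesystem: true the write would be denied)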
• [SLOW TEST:10.191 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":972,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:02:48.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1898 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 25 00:02:48.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6589' Jan 25 00:02:49.098: INFO: stderr: "" Jan 25 00:02:49.098: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Jan 25 00:02:59.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6589 -o json' Jan 25 00:02:59.322: INFO: stderr: "" Jan 25 00:02:59.322: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-01-25T00:02:49Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6589\",\n \"resourceVersion\": \"4120216\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6589/pods/e2e-test-httpd-pod\",\n \"uid\": \"e7cd5b94-a912-4666-9c35-a3346ccb38a1\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-4nkwm\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": 
\"jerma-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-4nkwm\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-4nkwm\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-25T00:02:49Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-25T00:02:56Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-25T00:02:56Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-25T00:02:49Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://7f153f0f9bda3d64b39e09c14805e2f5aec38fe2b4c4f75e8037acf8df75864a\",\n \"image\": \"httpd:2.4.38-alpine\",\n \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-25T00:02:55Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.2.250\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"podIPs\": [\n {\n \"ip\": \"10.44.0.1\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-25T00:02:49Z\"\n }\n}\n" STEP: replace the image in the pod Jan 25 00:02:59.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6589' Jan 25 00:02:59.701: INFO: stderr: "" Jan 25 00:02:59.701: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1903 Jan 25 00:02:59.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6589' Jan 25 00:03:06.700: INFO: stderr: "" Jan 25 00:03:06.700: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:03:06.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6589" for this suite. 
• [SLOW TEST:18.560 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1894 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":57,"skipped":1006,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:03:06.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-configmap-dw9c STEP: Creating a pod to test atomic-volume-subpath Jan 25 00:03:06.832: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dw9c" in namespace "subpath-4101" to be "success or failure" Jan 25 00:03:06.855: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.165079ms Jan 25 00:03:08.873: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040303417s Jan 25 00:03:10.879: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046880256s Jan 25 00:03:12.887: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. Elapsed: 6.054155476s Jan 25 00:03:14.891: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. Elapsed: 8.058923921s Jan 25 00:03:16.897: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. Elapsed: 10.064426975s Jan 25 00:03:18.904: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. Elapsed: 12.071711188s Jan 25 00:03:20.912: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. Elapsed: 14.07968662s Jan 25 00:03:22.919: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. Elapsed: 16.086677613s Jan 25 00:03:24.925: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. Elapsed: 18.09273458s Jan 25 00:03:26.934: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. Elapsed: 20.101198999s Jan 25 00:03:28.941: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.10879926s Jan 25 00:03:30.953: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. Elapsed: 24.120777846s Jan 25 00:03:32.959: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Running", Reason="", readiness=true. Elapsed: 26.12631575s Jan 25 00:03:34.962: INFO: Pod "pod-subpath-test-configmap-dw9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.129831256s STEP: Saw pod success Jan 25 00:03:34.962: INFO: Pod "pod-subpath-test-configmap-dw9c" satisfied condition "success or failure" Jan 25 00:03:34.964: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-dw9c container test-container-subpath-configmap-dw9c: STEP: delete the pod Jan 25 00:03:35.002: INFO: Waiting for pod pod-subpath-test-configmap-dw9c to disappear Jan 25 00:03:35.018: INFO: Pod pod-subpath-test-configmap-dw9c no longer exists STEP: Deleting pod pod-subpath-test-configmap-dw9c Jan 25 00:03:35.018: INFO: Deleting pod "pod-subpath-test-configmap-dw9c" in namespace "subpath-4101" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:03:35.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4101" for this suite. • [SLOW TEST:28.312 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":58,"skipped":1010,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:03:35.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:03:35.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1001" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":59,"skipped":1024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:03:35.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:03:52.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4940" for this suite. • [SLOW TEST:17.276 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":60,"skipped":1073,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:03:52.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 25 00:03:53.636: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 25 00:03:56.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:03:58.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:04:00.146: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507433, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 25 00:04:03.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:04:03.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7506" for this suite. STEP: Destroying namespace "webhook-7506-markers" for this suite. 
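The two ResourceQuota specs further up follow one lifecycle: create a quota, watch .status.used rise as matching objects are created, and watch the usage get released after deletion. By hand (names hypothetical; note that existing secrets such as the default token also count toward usage):

kubectl create quota quota-probe --hard=secrets=5
kubectl create secret generic quota-secret --from-literal=k=v
kubectl get quota quota-probe -o jsonpath='{.status.used.secrets}'
kubectl delete secret quota-secret   # usage drops again shortly after

The discovery spec that just finished is, at bottom, three raw GETs against the discovery documents, checked for the admissionregistration group, its v1 version, and the two webhook resources:

kubectl get --raw /apis
kubectl get --raw /apis/admissionregistration.k8s.io
kubectl get --raw /apis/admissionregistration.k8s.io/v1
# the v1 document must list mutatingwebhookconfigurations and validatingwebhookconfigurations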
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.720 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":61,"skipped":1084,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:04:03.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 25 00:04:04.132: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 25 00:04:06.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:04:08.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:04:10.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:04:12.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507444, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 25 00:04:15.249: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:04:15.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-479" for this suite. STEP: Destroying namespace "webhook-479-markers" for this suite. 
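The listing spec treats validating webhook configurations as a labelled collection: list them, delete the whole collection in one call, and confirm that an object the webhooks previously rejected now creates cleanly. A sketch (label hypothetical):

kubectl get validatingwebhookconfigurations -l e2e-list-test=demo
kubectl delete validatingwebhookconfigurations -l e2e-list-test=demo
# after the collection delete, the previously denied configmap is admitted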
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.826 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":62,"skipped":1101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:04:16.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service multi-endpoint-test in namespace services-9459 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9459 to expose endpoints map[] Jan 25 00:04:16.480: INFO: successfully validated that service multi-endpoint-test in namespace services-9459 exposes endpoints map[] (6.196701ms elapsed) STEP: Creating pod pod1 in namespace services-9459 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9459 to expose endpoints map[pod1:[100]] Jan 25 00:04:20.654: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.157942104s elapsed, will retry) Jan 25 00:04:24.717: INFO: successfully validated that service multi-endpoint-test in namespace services-9459 exposes endpoints map[pod1:[100]] (8.220688192s elapsed) STEP: Creating pod pod2 in namespace services-9459 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9459 to expose endpoints map[pod1:[100] pod2:[101]] Jan 25 00:04:29.484: INFO: Unexpected endpoints: found map[3fce37e0-43cf-487f-bd6b-aeb984cb1c93:[100]], expected map[pod1:[100] pod2:[101]] (4.761188747s elapsed, will retry) Jan 25 00:04:31.778: INFO: successfully validated that service multi-endpoint-test in namespace services-9459 exposes endpoints map[pod1:[100] pod2:[101]] (7.056033651s elapsed) STEP: Deleting pod pod1 in namespace services-9459 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9459 to expose endpoints map[pod2:[101]] Jan 25 00:04:31.840: INFO: successfully validated that service multi-endpoint-test in namespace services-9459 exposes endpoints map[pod2:[101]] (31.698611ms elapsed) STEP: Deleting pod pod2 in namespace services-9459 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9459 to expose endpoints map[] Jan 25 00:04:32.927: INFO: successfully validated that service multi-endpoint-test in namespace 
services-9459 exposes endpoints map[] (1.056795707s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:04:33.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9459" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 • [SLOW TEST:16.914 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":63,"skipped":1126,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:04:33.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5814 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5814 I0125 00:04:33.281134 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5814, replica count: 2 I0125 00:04:36.332199 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:04:39.332607 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:04:42.332995 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:04:45.333436 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:04:48.333899 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 25 00:04:48.334: INFO: Creating new exec pod Jan 25 00:04:57.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5814 execpodkfrw6 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 25 00:04:57.849: INFO: stderr: "I0125 00:04:57.599680 1110 log.go:172] (0xc000510f20) (0xc000a5c000) Create stream\nI0125 00:04:57.600051 1110 log.go:172] 
(0xc000510f20) (0xc000a5c000) Stream added, broadcasting: 1\nI0125 00:04:57.604919 1110 log.go:172] (0xc000510f20) Reply frame received for 1\nI0125 00:04:57.604958 1110 log.go:172] (0xc000510f20) (0xc00071fb80) Create stream\nI0125 00:04:57.604969 1110 log.go:172] (0xc000510f20) (0xc00071fb80) Stream added, broadcasting: 3\nI0125 00:04:57.607362 1110 log.go:172] (0xc000510f20) Reply frame received for 3\nI0125 00:04:57.607409 1110 log.go:172] (0xc000510f20) (0xc000a5c0a0) Create stream\nI0125 00:04:57.607442 1110 log.go:172] (0xc000510f20) (0xc000a5c0a0) Stream added, broadcasting: 5\nI0125 00:04:57.609497 1110 log.go:172] (0xc000510f20) Reply frame received for 5\nI0125 00:04:57.690186 1110 log.go:172] (0xc000510f20) Data frame received for 5\nI0125 00:04:57.690433 1110 log.go:172] (0xc000a5c0a0) (5) Data frame handling\nI0125 00:04:57.690514 1110 log.go:172] (0xc000a5c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0125 00:04:57.706504 1110 log.go:172] (0xc000510f20) Data frame received for 5\nI0125 00:04:57.706709 1110 log.go:172] (0xc000a5c0a0) (5) Data frame handling\nI0125 00:04:57.706782 1110 log.go:172] (0xc000a5c0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0125 00:04:57.833611 1110 log.go:172] (0xc000510f20) Data frame received for 1\nI0125 00:04:57.833733 1110 log.go:172] (0xc000510f20) (0xc000a5c0a0) Stream removed, broadcasting: 5\nI0125 00:04:57.833832 1110 log.go:172] (0xc000a5c000) (1) Data frame handling\nI0125 00:04:57.833877 1110 log.go:172] (0xc000a5c000) (1) Data frame sent\nI0125 00:04:57.833909 1110 log.go:172] (0xc000510f20) (0xc00071fb80) Stream removed, broadcasting: 3\nI0125 00:04:57.833937 1110 log.go:172] (0xc000510f20) (0xc000a5c000) Stream removed, broadcasting: 1\nI0125 00:04:57.834007 1110 log.go:172] (0xc000510f20) Go away received\nI0125 00:04:57.834888 1110 log.go:172] (0xc000510f20) (0xc000a5c000) Stream removed, broadcasting: 1\nI0125 00:04:57.834907 1110 log.go:172] (0xc000510f20) (0xc00071fb80) Stream removed, broadcasting: 3\nI0125 00:04:57.834911 1110 log.go:172] (0xc000510f20) (0xc000a5c0a0) Stream removed, broadcasting: 5\n" Jan 25 00:04:57.849: INFO: stdout: "" Jan 25 00:04:57.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5814 execpodkfrw6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.230.187 80' Jan 25 00:04:58.263: INFO: stderr: "I0125 00:04:58.094630 1131 log.go:172] (0xc000a4f340) (0xc000b3aaa0) Create stream\nI0125 00:04:58.094741 1131 log.go:172] (0xc000a4f340) (0xc000b3aaa0) Stream added, broadcasting: 1\nI0125 00:04:58.108830 1131 log.go:172] (0xc000a4f340) Reply frame received for 1\nI0125 00:04:58.109003 1131 log.go:172] (0xc000a4f340) (0xc0006886e0) Create stream\nI0125 00:04:58.109023 1131 log.go:172] (0xc000a4f340) (0xc0006886e0) Stream added, broadcasting: 3\nI0125 00:04:58.111275 1131 log.go:172] (0xc000a4f340) Reply frame received for 3\nI0125 00:04:58.111319 1131 log.go:172] (0xc000a4f340) (0xc00056f360) Create stream\nI0125 00:04:58.111332 1131 log.go:172] (0xc000a4f340) (0xc00056f360) Stream added, broadcasting: 5\nI0125 00:04:58.112767 1131 log.go:172] (0xc000a4f340) Reply frame received for 5\nI0125 00:04:58.187042 1131 log.go:172] (0xc000a4f340) Data frame received for 5\nI0125 00:04:58.187155 1131 log.go:172] (0xc00056f360) (5) Data frame handling\nI0125 00:04:58.187203 1131 log.go:172] (0xc00056f360) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.230.187 80\nI0125 00:04:58.190791 1131 log.go:172] (0xc000a4f340) 
Data frame received for 5\nI0125 00:04:58.190966 1131 log.go:172] (0xc00056f360) (5) Data frame handling\nI0125 00:04:58.191037 1131 log.go:172] (0xc00056f360) (5) Data frame sent\nConnection to 10.96.230.187 80 port [tcp/http] succeeded!\nI0125 00:04:58.253927 1131 log.go:172] (0xc000a4f340) (0xc0006886e0) Stream removed, broadcasting: 3\nI0125 00:04:58.254414 1131 log.go:172] (0xc000a4f340) Data frame received for 1\nI0125 00:04:58.254732 1131 log.go:172] (0xc000a4f340) (0xc00056f360) Stream removed, broadcasting: 5\nI0125 00:04:58.254967 1131 log.go:172] (0xc000b3aaa0) (1) Data frame handling\nI0125 00:04:58.255020 1131 log.go:172] (0xc000b3aaa0) (1) Data frame sent\nI0125 00:04:58.255039 1131 log.go:172] (0xc000a4f340) (0xc000b3aaa0) Stream removed, broadcasting: 1\nI0125 00:04:58.255063 1131 log.go:172] (0xc000a4f340) Go away received\nI0125 00:04:58.256816 1131 log.go:172] (0xc000a4f340) (0xc000b3aaa0) Stream removed, broadcasting: 1\nI0125 00:04:58.256850 1131 log.go:172] (0xc000a4f340) (0xc0006886e0) Stream removed, broadcasting: 3\nI0125 00:04:58.256874 1131 log.go:172] (0xc000a4f340) (0xc00056f360) Stream removed, broadcasting: 5\n" Jan 25 00:04:58.263: INFO: stdout: "" Jan 25 00:04:58.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5814 execpodkfrw6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 31379' Jan 25 00:04:58.621: INFO: stderr: "I0125 00:04:58.434810 1152 log.go:172] (0xc000620a50) (0xc00094e280) Create stream\nI0125 00:04:58.435225 1152 log.go:172] (0xc000620a50) (0xc00094e280) Stream added, broadcasting: 1\nI0125 00:04:58.439391 1152 log.go:172] (0xc000620a50) Reply frame received for 1\nI0125 00:04:58.439451 1152 log.go:172] (0xc000620a50) (0xc0006ddb80) Create stream\nI0125 00:04:58.439461 1152 log.go:172] (0xc000620a50) (0xc0006ddb80) Stream added, broadcasting: 3\nI0125 00:04:58.440992 1152 log.go:172] (0xc000620a50) Reply frame received for 3\nI0125 00:04:58.441081 1152 log.go:172] (0xc000620a50) (0xc00094e320) Create stream\nI0125 00:04:58.441091 1152 log.go:172] (0xc000620a50) (0xc00094e320) Stream added, broadcasting: 5\nI0125 00:04:58.443494 1152 log.go:172] (0xc000620a50) Reply frame received for 5\nI0125 00:04:58.519612 1152 log.go:172] (0xc000620a50) Data frame received for 5\nI0125 00:04:58.519772 1152 log.go:172] (0xc00094e320) (5) Data frame handling\nI0125 00:04:58.519807 1152 log.go:172] (0xc00094e320) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 31379\nI0125 00:04:58.524508 1152 log.go:172] (0xc000620a50) Data frame received for 5\nI0125 00:04:58.524528 1152 log.go:172] (0xc00094e320) (5) Data frame handling\nI0125 00:04:58.524547 1152 log.go:172] (0xc00094e320) (5) Data frame sent\nConnection to 10.96.2.250 31379 port [tcp/31379] succeeded!\nI0125 00:04:58.604571 1152 log.go:172] (0xc000620a50) Data frame received for 1\nI0125 00:04:58.604755 1152 log.go:172] (0xc000620a50) (0xc0006ddb80) Stream removed, broadcasting: 3\nI0125 00:04:58.605003 1152 log.go:172] (0xc00094e280) (1) Data frame handling\nI0125 00:04:58.605301 1152 log.go:172] (0xc00094e280) (1) Data frame sent\nI0125 00:04:58.605376 1152 log.go:172] (0xc000620a50) (0xc00094e320) Stream removed, broadcasting: 5\nI0125 00:04:58.605565 1152 log.go:172] (0xc000620a50) (0xc00094e280) Stream removed, broadcasting: 1\nI0125 00:04:58.605690 1152 log.go:172] (0xc000620a50) Go away received\nI0125 00:04:58.608168 1152 log.go:172] (0xc000620a50) (0xc00094e280) Stream removed, broadcasting: 1\nI0125 00:04:58.608229 1152 log.go:172] (0xc000620a50) 
(0xc0006ddb80) Stream removed, broadcasting: 3\nI0125 00:04:58.608261 1152 log.go:172] (0xc000620a50) (0xc00094e320) Stream removed, broadcasting: 5\n" Jan 25 00:04:58.621: INFO: stdout: "" Jan 25 00:04:58.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5814 execpodkfrw6 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 31379' Jan 25 00:04:58.914: INFO: stderr: "I0125 00:04:58.744920 1175 log.go:172] (0xc0000f53f0) (0xc0006a9e00) Create stream\nI0125 00:04:58.745274 1175 log.go:172] (0xc0000f53f0) (0xc0006a9e00) Stream added, broadcasting: 1\nI0125 00:04:58.748249 1175 log.go:172] (0xc0000f53f0) Reply frame received for 1\nI0125 00:04:58.748275 1175 log.go:172] (0xc0000f53f0) (0xc0006a9ea0) Create stream\nI0125 00:04:58.748281 1175 log.go:172] (0xc0000f53f0) (0xc0006a9ea0) Stream added, broadcasting: 3\nI0125 00:04:58.749599 1175 log.go:172] (0xc0000f53f0) Reply frame received for 3\nI0125 00:04:58.749654 1175 log.go:172] (0xc0000f53f0) (0xc000932000) Create stream\nI0125 00:04:58.749671 1175 log.go:172] (0xc0000f53f0) (0xc000932000) Stream added, broadcasting: 5\nI0125 00:04:58.752290 1175 log.go:172] (0xc0000f53f0) Reply frame received for 5\nI0125 00:04:58.829186 1175 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0125 00:04:58.829282 1175 log.go:172] (0xc000932000) (5) Data frame handling\nI0125 00:04:58.829319 1175 log.go:172] (0xc000932000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 31379\nI0125 00:04:58.830880 1175 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0125 00:04:58.830891 1175 log.go:172] (0xc000932000) (5) Data frame handling\nI0125 00:04:58.830901 1175 log.go:172] (0xc000932000) (5) Data frame sent\nConnection to 10.96.1.234 31379 port [tcp/31379] succeeded!\nI0125 00:04:58.902138 1175 log.go:172] (0xc0000f53f0) Data frame received for 1\nI0125 00:04:58.902276 1175 log.go:172] (0xc0006a9e00) (1) Data frame handling\nI0125 00:04:58.902323 1175 log.go:172] (0xc0006a9e00) (1) Data frame sent\nI0125 00:04:58.902402 1175 log.go:172] (0xc0000f53f0) (0xc0006a9e00) Stream removed, broadcasting: 1\nI0125 00:04:58.904117 1175 log.go:172] (0xc0000f53f0) (0xc0006a9ea0) Stream removed, broadcasting: 3\nI0125 00:04:58.905715 1175 log.go:172] (0xc0000f53f0) (0xc000932000) Stream removed, broadcasting: 5\nI0125 00:04:58.905826 1175 log.go:172] (0xc0000f53f0) (0xc0006a9e00) Stream removed, broadcasting: 1\nI0125 00:04:58.905908 1175 log.go:172] (0xc0000f53f0) (0xc0006a9ea0) Stream removed, broadcasting: 3\nI0125 00:04:58.905927 1175 log.go:172] (0xc0000f53f0) (0xc000932000) Stream removed, broadcasting: 5\nI0125 00:04:58.906010 1175 log.go:172] (0xc0000f53f0) Go away received\n" Jan 25 00:04:58.914: INFO: stdout: "" Jan 25 00:04:58.914: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:04:58.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5814" for this suite. 
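For context, the type change this test drives through the API can be reproduced by hand. A rough sketch only; the service name and patch shape are assumptions, while the namespace, helper pod, and nc probe mirror the log above:
# Create an ExternalName service, then convert it to NodePort (name illustrative).
kubectl create service externalname demo-external-svc --external-name=example.com --namespace=services-5814
kubectl patch service demo-external-svc --namespace=services-5814 \
  -p '{"spec":{"type":"NodePort","externalName":null,"ports":[{"port":80,"targetPort":80}]}}'
# Read back the allocated node port and probe it from the helper pod, as the test does.
NODE_PORT=$(kubectl get service demo-external-svc --namespace=services-5814 -o jsonpath='{.spec.ports[0].nodePort}')
kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5814 execpodkfrw6 -- /bin/sh -x -c "nc -zv -t -w 2 10.96.2.250 $NODE_PORT"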
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 • [SLOW TEST:25.949 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":64,"skipped":1135,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:04:59.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating replication controller my-hostname-basic-d0f80620-9f0c-4eaa-b501-f5506502aa9f Jan 25 00:04:59.104: INFO: Pod name my-hostname-basic-d0f80620-9f0c-4eaa-b501-f5506502aa9f: Found 0 pods out of 1 Jan 25 00:05:04.110: INFO: Pod name my-hostname-basic-d0f80620-9f0c-4eaa-b501-f5506502aa9f: Found 1 pods out of 1 Jan 25 00:05:04.110: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d0f80620-9f0c-4eaa-b501-f5506502aa9f" are running Jan 25 00:05:08.490: INFO: Pod "my-hostname-basic-d0f80620-9f0c-4eaa-b501-f5506502aa9f-wp65b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 00:04:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 00:04:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d0f80620-9f0c-4eaa-b501-f5506502aa9f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 00:04:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d0f80620-9f0c-4eaa-b501-f5506502aa9f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 00:04:59 +0000 UTC Reason: Message:}]) Jan 25 00:05:08.490: INFO: Trying to dial the pod Jan 25 00:05:13.515: INFO: Controller my-hostname-basic-d0f80620-9f0c-4eaa-b501-f5506502aa9f: Got expected result from replica 1 [my-hostname-basic-d0f80620-9f0c-4eaa-b501-f5506502aa9f-wp65b]: "my-hostname-basic-d0f80620-9f0c-4eaa-b501-f5506502aa9f-wp65b", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:05:13.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-694" for this suite. 
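The ReplicationController above runs one replica that serves its own hostname, then dials it and expects the pod name back. A minimal equivalent manifest, with an illustrative name (the test generates a UUID-based one); the agnhost image and serve-hostname behavior match what this conformance test exercises:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-demo
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]   # replies with the pod's hostname
        ports:
        - containerPort: 9376
EOF
# Dialing the replica should return its own pod name, which is what the
# "Got expected result from replica 1" line above verifies.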
• [SLOW TEST:14.513 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":65,"skipped":1140,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:05:13.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-map-f891dd41-13fb-4dd3-8ab3-320a98a0211a STEP: Creating a pod to test consume configMaps Jan 25 00:05:13.653: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f9a0f3c1-4a3d-41bd-9f2e-73aca600cd09" in namespace "projected-8595" to be "success or failure" Jan 25 00:05:13.661: INFO: Pod "pod-projected-configmaps-f9a0f3c1-4a3d-41bd-9f2e-73aca600cd09": Phase="Pending", Reason="", readiness=false. Elapsed: 7.395185ms Jan 25 00:05:15.667: INFO: Pod "pod-projected-configmaps-f9a0f3c1-4a3d-41bd-9f2e-73aca600cd09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013254357s Jan 25 00:05:17.672: INFO: Pod "pod-projected-configmaps-f9a0f3c1-4a3d-41bd-9f2e-73aca600cd09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018901179s Jan 25 00:05:19.679: INFO: Pod "pod-projected-configmaps-f9a0f3c1-4a3d-41bd-9f2e-73aca600cd09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025040836s Jan 25 00:05:21.689: INFO: Pod "pod-projected-configmaps-f9a0f3c1-4a3d-41bd-9f2e-73aca600cd09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035628478s STEP: Saw pod success Jan 25 00:05:21.689: INFO: Pod "pod-projected-configmaps-f9a0f3c1-4a3d-41bd-9f2e-73aca600cd09" satisfied condition "success or failure" Jan 25 00:05:21.699: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-f9a0f3c1-4a3d-41bd-9f2e-73aca600cd09 container projected-configmap-volume-test: STEP: delete the pod Jan 25 00:05:21.821: INFO: Waiting for pod pod-projected-configmaps-f9a0f3c1-4a3d-41bd-9f2e-73aca600cd09 to disappear Jan 25 00:05:21.829: INFO: Pod pod-projected-configmaps-f9a0f3c1-4a3d-41bd-9f2e-73aca600cd09 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:05:21.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8595" for this suite. 
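The projected ConfigMap test above mounts a key under a remapped path and waits for the pod to end Succeeded. A minimal sketch of the same mechanics; names, image, and command are illustrative, not the framework's exact manifest:
kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never            # pod should finish Succeeded, the "success or failure" condition above
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-demo
          items:
          - key: data-1
            path: path/to/data-1  # the key-to-path "mapping" in the test name
EOF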
• [SLOW TEST:8.322 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1144,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:05:21.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap configmap-9945/configmap-test-c3956334-3215-4fab-82e5-149dd1c9e260 STEP: Creating a pod to test consume configMaps Jan 25 00:05:22.114: INFO: Waiting up to 5m0s for pod "pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1" in namespace "configmap-9945" to be "success or failure" Jan 25 00:05:22.139: INFO: Pod "pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.777829ms Jan 25 00:05:24.149: INFO: Pod "pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034248839s Jan 25 00:05:26.157: INFO: Pod "pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042198321s Jan 25 00:05:28.165: INFO: Pod "pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050233297s Jan 25 00:05:30.170: INFO: Pod "pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055421422s Jan 25 00:05:32.177: INFO: Pod "pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062707424s STEP: Saw pod success Jan 25 00:05:32.177: INFO: Pod "pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1" satisfied condition "success or failure" Jan 25 00:05:32.183: INFO: Trying to get logs from node jerma-node pod pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1 container env-test: STEP: delete the pod Jan 25 00:05:32.215: INFO: Waiting for pod pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1 to disappear Jan 25 00:05:32.248: INFO: Pod pod-configmaps-24eec7d9-d1f4-4836-9dcc-a7963d1c9eb1 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:05:32.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9945" for this suite. 
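The ConfigMap-as-environment test follows the same create-pod-and-wait-for-Succeeded pattern, but injects the key through env rather than a volume. A minimal sketch with illustrative names and image:
kubectl create configmap configmap-test-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    # grep exits non-zero if the variable was not injected, so the pod's
    # final phase reflects the check, mirroring "success or failure" above.
    command: ["sh", "-c", "env | grep DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-demo
          key: data-1
EOF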
• [SLOW TEST:10.405 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1151,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:05:32.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-f276e58e-75a5-43ac-a3cf-f3c9324df7a7 STEP: Creating a pod to test consume secrets Jan 25 00:05:32.381: INFO: Waiting up to 5m0s for pod "pod-secrets-eea015c0-1419-4cd1-99d2-0c13300e416e" in namespace "secrets-2212" to be "success or failure" Jan 25 00:05:32.384: INFO: Pod "pod-secrets-eea015c0-1419-4cd1-99d2-0c13300e416e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.058466ms Jan 25 00:05:34.390: INFO: Pod "pod-secrets-eea015c0-1419-4cd1-99d2-0c13300e416e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008991628s Jan 25 00:05:36.396: INFO: Pod "pod-secrets-eea015c0-1419-4cd1-99d2-0c13300e416e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015017316s Jan 25 00:05:38.401: INFO: Pod "pod-secrets-eea015c0-1419-4cd1-99d2-0c13300e416e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019735435s Jan 25 00:05:40.413: INFO: Pod "pod-secrets-eea015c0-1419-4cd1-99d2-0c13300e416e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.031725226s STEP: Saw pod success Jan 25 00:05:40.413: INFO: Pod "pod-secrets-eea015c0-1419-4cd1-99d2-0c13300e416e" satisfied condition "success or failure" Jan 25 00:05:40.424: INFO: Trying to get logs from node jerma-node pod pod-secrets-eea015c0-1419-4cd1-99d2-0c13300e416e container secret-volume-test: STEP: delete the pod Jan 25 00:05:40.499: INFO: Waiting for pod pod-secrets-eea015c0-1419-4cd1-99d2-0c13300e416e to disappear Jan 25 00:05:40.505: INFO: Pod pod-secrets-eea015c0-1419-4cd1-99d2-0c13300e416e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:05:40.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2212" for this suite. 
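Same pattern again for the secret volume test: the container reads the mounted key once and exits. A sketch with illustrative names and image:
kubectl create secret generic secret-test-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo
EOF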
• [SLOW TEST:8.264 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1171,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:05:40.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 00:05:40.652: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:05:42.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2228" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":69,"skipped":1174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:05:42.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Jan 25 00:05:42.419: INFO: Waiting up to 5m0s for pod "downward-api-2ea8ff29-a810-4385-ad5d-56f037931705" in namespace "downward-api-2005" to be "success or failure" Jan 25 00:05:42.443: INFO: Pod "downward-api-2ea8ff29-a810-4385-ad5d-56f037931705": Phase="Pending", Reason="", readiness=false. Elapsed: 23.460787ms Jan 25 00:05:44.448: INFO: Pod "downward-api-2ea8ff29-a810-4385-ad5d-56f037931705": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028679592s Jan 25 00:05:46.464: INFO: Pod "downward-api-2ea8ff29-a810-4385-ad5d-56f037931705": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04450243s Jan 25 00:05:48.473: INFO: Pod "downward-api-2ea8ff29-a810-4385-ad5d-56f037931705": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053907287s Jan 25 00:05:50.503: INFO: Pod "downward-api-2ea8ff29-a810-4385-ad5d-56f037931705": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083216291s STEP: Saw pod success Jan 25 00:05:50.503: INFO: Pod "downward-api-2ea8ff29-a810-4385-ad5d-56f037931705" satisfied condition "success or failure" Jan 25 00:05:50.509: INFO: Trying to get logs from node jerma-node pod downward-api-2ea8ff29-a810-4385-ad5d-56f037931705 container dapi-container: STEP: delete the pod Jan 25 00:05:50.975: INFO: Waiting for pod downward-api-2ea8ff29-a810-4385-ad5d-56f037931705 to disappear Jan 25 00:05:50.981: INFO: Pod downward-api-2ea8ff29-a810-4385-ad5d-56f037931705 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:05:50.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2005" for this suite. • [SLOW TEST:8.707 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1203,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:05:50.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-secret-4786 STEP: Creating a pod to test atomic-volume-subpath Jan 25 00:05:51.144: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4786" in namespace "subpath-6909" to be "success or failure" Jan 25 00:05:51.165: INFO: Pod "pod-subpath-test-secret-4786": Phase="Pending", Reason="", readiness=false. Elapsed: 21.208479ms Jan 25 00:05:53.172: INFO: Pod "pod-subpath-test-secret-4786": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027989182s Jan 25 00:05:55.176: INFO: Pod "pod-subpath-test-secret-4786": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031983873s Jan 25 00:05:57.182: INFO: Pod "pod-subpath-test-secret-4786": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.03859951s Jan 25 00:05:59.190: INFO: Pod "pod-subpath-test-secret-4786": Phase="Running", Reason="", readiness=true. Elapsed: 8.046383298s Jan 25 00:06:01.196: INFO: Pod "pod-subpath-test-secret-4786": Phase="Running", Reason="", readiness=true. Elapsed: 10.052010622s Jan 25 00:06:03.200: INFO: Pod "pod-subpath-test-secret-4786": Phase="Running", Reason="", readiness=true. Elapsed: 12.056851065s Jan 25 00:06:05.205: INFO: Pod "pod-subpath-test-secret-4786": Phase="Running", Reason="", readiness=true. Elapsed: 14.06130771s Jan 25 00:06:07.211: INFO: Pod "pod-subpath-test-secret-4786": Phase="Running", Reason="", readiness=true. Elapsed: 16.067490961s Jan 25 00:06:09.217: INFO: Pod "pod-subpath-test-secret-4786": Phase="Running", Reason="", readiness=true. Elapsed: 18.073030529s Jan 25 00:06:11.226: INFO: Pod "pod-subpath-test-secret-4786": Phase="Running", Reason="", readiness=true. Elapsed: 20.082791144s Jan 25 00:06:13.234: INFO: Pod "pod-subpath-test-secret-4786": Phase="Running", Reason="", readiness=true. Elapsed: 22.090369734s Jan 25 00:06:15.241: INFO: Pod "pod-subpath-test-secret-4786": Phase="Running", Reason="", readiness=true. Elapsed: 24.097377988s Jan 25 00:06:17.246: INFO: Pod "pod-subpath-test-secret-4786": Phase="Running", Reason="", readiness=true. Elapsed: 26.101953369s Jan 25 00:06:19.252: INFO: Pod "pod-subpath-test-secret-4786": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.108695382s STEP: Saw pod success Jan 25 00:06:19.252: INFO: Pod "pod-subpath-test-secret-4786" satisfied condition "success or failure" Jan 25 00:06:19.256: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-4786 container test-container-subpath-secret-4786: STEP: delete the pod Jan 25 00:06:19.286: INFO: Waiting for pod pod-subpath-test-secret-4786 to disappear Jan 25 00:06:19.403: INFO: Pod pod-subpath-test-secret-4786 no longer exists STEP: Deleting pod pod-subpath-test-secret-4786 Jan 25 00:06:19.403: INFO: Deleting pod "pod-subpath-test-secret-4786" in namespace "subpath-6909" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:06:19.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6909" for this suite. 
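The subpath test differs from the plain secret-volume test above in two ways: it mounts a single key via subPath instead of the whole volume, and it keeps the container Running for a while (note the roughly 20 seconds of Running polls) so the atomic-writer behavior can be observed. A simplified sketch of the subPath mechanics only; names, image, and command are illustrative:
kubectl create secret generic subpath-demo --from-literal=test-file=contents
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/test-file && sleep 20"]
    volumeMounts:
    - name: data
      mountPath: /test-volume/test-file
      subPath: test-file          # mount one key of the volume, not the whole directory
  volumes:
  - name: data
    secret:
      secretName: subpath-demo
EOF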
• [SLOW TEST:28.433 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":71,"skipped":1214,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:06:19.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 00:06:19.543: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:06:27.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5623" for this suite. • [SLOW TEST:8.200 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1220,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:06:27.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 25 00:06:27.690: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 25 00:06:27.702: INFO: Waiting for terminating namespaces to be deleted... 
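On the websocket log test that just passed: it exercises the standard pod log subresource, the same URL kubectl logs drives, and simply negotiates a websocket upgrade on it. A rough plain-HTTP equivalent for the same endpoint, using the pod name visible in the node listing below:
# kubectl proxy exposes the apiserver locally; the e2e test requests a
# websocket upgrade on this same /log URL instead of plain HTTP.
kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/pods-5623/pods/pod-logs-websocket-fcc0410a-5cdc-446d-a30e-771985b44691/log?follow=true"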
Jan 25 00:06:27.705: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 25 00:06:27.713: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 25 00:06:27.713: INFO: Container weave ready: true, restart count 1 Jan 25 00:06:27.713: INFO: Container weave-npc ready: true, restart count 0 Jan 25 00:06:27.713: INFO: pod-logs-websocket-fcc0410a-5cdc-446d-a30e-771985b44691 from pods-5623 started at 2020-01-25 00:06:19 +0000 UTC (1 container statuses recorded) Jan 25 00:06:27.713: INFO: Container main ready: true, restart count 0 Jan 25 00:06:27.713: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 25 00:06:27.713: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 00:06:27.713: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 25 00:06:27.731: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 25 00:06:27.731: INFO: Container kube-scheduler ready: true, restart count 3 Jan 25 00:06:27.731: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 25 00:06:27.731: INFO: Container kube-apiserver ready: true, restart count 1 Jan 25 00:06:27.731: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 25 00:06:27.731: INFO: Container etcd ready: true, restart count 1 Jan 25 00:06:27.731: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 25 00:06:27.731: INFO: Container coredns ready: true, restart count 0 Jan 25 00:06:27.731: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 25 00:06:27.731: INFO: Container coredns ready: true, restart count 0 Jan 25 00:06:27.731: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 25 00:06:27.731: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 25 00:06:27.731: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 25 00:06:27.731: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 00:06:27.731: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 25 00:06:27.731: INFO: Container weave ready: true, restart count 0 Jan 25 00:06:27.731: INFO: Container weave-npc ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4efa63e0-c52c-4191-a7f2-3df5572e2fc5 42 STEP: Trying to relaunch the pod, now with labels.
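The label-and-relaunch flow above, done by hand with the same random label key and value the test applied (pod name and image are illustrative):
kubectl label node jerma-node kubernetes.io/e2e-4efa63e0-c52c-4191-a7f2-3df5572e2fc5=42
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-4efa63e0-c52c-4191-a7f2-3df5572e2fc5: "42"
  containers:
  - name: with-labels
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF
# The trailing-dash form removes the label again, which is what the cleanup
# step below does.
kubectl label node jerma-node kubernetes.io/e2e-4efa63e0-c52c-4191-a7f2-3df5572e2fc5-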
STEP: removing the label kubernetes.io/e2e-4efa63e0-c52c-4191-a7f2-3df5572e2fc5 off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-4efa63e0-c52c-4191-a7f2-3df5572e2fc5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:06:44.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9977" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 • [SLOW TEST:16.516 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":73,"skipped":1240,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:06:44.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-5a226a81-fdd8-4146-8436-2febacd42bcc STEP: Creating a pod to test consume configMaps Jan 25 00:06:44.211: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9" in namespace "configmap-2649" to be "success or failure" Jan 25 00:06:44.214: INFO: Pod "pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.589091ms Jan 25 00:06:46.219: INFO: Pod "pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007581825s Jan 25 00:06:48.297: INFO: Pod "pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086009163s Jan 25 00:06:50.321: INFO: Pod "pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109923635s Jan 25 00:06:52.339: INFO: Pod "pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12794241s Jan 25 00:06:54.343: INFO: Pod "pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.132224182s STEP: Saw pod success Jan 25 00:06:54.343: INFO: Pod "pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9" satisfied condition "success or failure" Jan 25 00:06:54.346: INFO: Trying to get logs from node jerma-node pod pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9 container configmap-volume-test: STEP: delete the pod Jan 25 00:06:54.449: INFO: Waiting for pod pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9 to disappear Jan 25 00:06:54.459: INFO: Pod pod-configmaps-1d115740-7d2b-4512-b961-8bc2a38defc9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:06:54.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2649" for this suite. • [SLOW TEST:10.336 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1257,"failed":0} [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:06:54.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
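A DaemonSet of the shape this step creates; the selector label and image are assumptions, not the suite's exact manifest. The availability polling that follows corresponds to waiting for one ready daemon pod per schedulable node:
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl rollout status daemonset/daemon-set   # blocks until every node runs an available pod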
Jan 25 00:06:54.693: INFO: Number of nodes with available pods: 0 Jan 25 00:06:54.693: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:06:55.708: INFO: Number of nodes with available pods: 0 Jan 25 00:06:55.708: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:06:57.351: INFO: Number of nodes with available pods: 0 Jan 25 00:06:57.351: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:06:57.771: INFO: Number of nodes with available pods: 0 Jan 25 00:06:57.771: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:06:58.706: INFO: Number of nodes with available pods: 0 Jan 25 00:06:58.706: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:06:59.716: INFO: Number of nodes with available pods: 0 Jan 25 00:06:59.716: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:02.430: INFO: Number of nodes with available pods: 0 Jan 25 00:07:02.430: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:03.664: INFO: Number of nodes with available pods: 0 Jan 25 00:07:03.664: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:04.012: INFO: Number of nodes with available pods: 0 Jan 25 00:07:04.012: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:04.743: INFO: Number of nodes with available pods: 1 Jan 25 00:07:04.743: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Jan 25 00:07:05.706: INFO: Number of nodes with available pods: 2 Jan 25 00:07:05.706: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jan 25 00:07:05.815: INFO: Number of nodes with available pods: 1 Jan 25 00:07:05.815: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:06.835: INFO: Number of nodes with available pods: 1 Jan 25 00:07:06.835: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:07.832: INFO: Number of nodes with available pods: 1 Jan 25 00:07:07.832: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:08.825: INFO: Number of nodes with available pods: 1 Jan 25 00:07:08.825: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:09.832: INFO: Number of nodes with available pods: 1 Jan 25 00:07:09.832: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:10.823: INFO: Number of nodes with available pods: 1 Jan 25 00:07:10.823: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:11.831: INFO: Number of nodes with available pods: 1 Jan 25 00:07:11.831: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:12.903: INFO: Number of nodes with available pods: 1 Jan 25 00:07:12.903: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:13.832: INFO: Number of nodes with available pods: 1 Jan 25 00:07:13.832: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:14.827: INFO: Number of nodes with available pods: 1 Jan 25 00:07:14.827: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:15.838: INFO: Number of nodes with available pods: 1 Jan 25 00:07:15.838: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:16.845: INFO: Number of nodes with available pods: 1 Jan 25 00:07:16.845: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:17.828: INFO: Number of nodes with available pods: 1 Jan 25 00:07:17.828: INFO: Node jerma-node is running more than one 
daemon pod Jan 25 00:07:18.830: INFO: Number of nodes with available pods: 1 Jan 25 00:07:18.830: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:19.828: INFO: Number of nodes with available pods: 1 Jan 25 00:07:19.828: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:20.828: INFO: Number of nodes with available pods: 1 Jan 25 00:07:20.828: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:21.834: INFO: Number of nodes with available pods: 1 Jan 25 00:07:21.834: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:22.832: INFO: Number of nodes with available pods: 1 Jan 25 00:07:22.832: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:23.830: INFO: Number of nodes with available pods: 1 Jan 25 00:07:23.830: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:24.828: INFO: Number of nodes with available pods: 1 Jan 25 00:07:24.828: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:25.828: INFO: Number of nodes with available pods: 1 Jan 25 00:07:25.828: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:26.831: INFO: Number of nodes with available pods: 1 Jan 25 00:07:26.831: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:27.834: INFO: Number of nodes with available pods: 1 Jan 25 00:07:27.834: INFO: Node jerma-node is running more than one daemon pod Jan 25 00:07:28.886: INFO: Number of nodes with available pods: 2 Jan 25 00:07:28.886: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8551, will wait for the garbage collector to delete the pods Jan 25 00:07:28.956: INFO: Deleting DaemonSet.extensions daemon-set took: 9.708081ms Jan 25 00:07:29.257: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.017536ms Jan 25 00:07:43.167: INFO: Number of nodes with available pods: 0 Jan 25 00:07:43.168: INFO: Number of running nodes: 0, number of available pods: 0 Jan 25 00:07:43.172: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8551/daemonsets","resourceVersion":"4121531"},"items":null} Jan 25 00:07:43.175: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8551/pods","resourceVersion":"4121531"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:07:43.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8551" for this suite. 
• [SLOW TEST:48.711 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":75,"skipped":1257,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:07:43.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 00:07:43.249: INFO: Creating deployment "test-recreate-deployment" Jan 25 00:07:43.299: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 25 00:07:43.319: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 25 00:07:45.331: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 25 00:07:45.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:07:47.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, 
loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:07:49.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507663, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:07:51.343: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 25 00:07:51.352: INFO: Updating deployment test-recreate-deployment Jan 25 00:07:51.352: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67 Jan 25 00:07:51.696: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-8730 /apis/apps/v1/namespaces/deployment-8730/deployments/test-recreate-deployment 0f474a84-f487-4f08-a7bb-0dbd017f5d8e 4121621 2 2020-01-25 00:07:43 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027274a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-25 00:07:51 +0000 UTC,LastTransitionTime:2020-01-25 00:07:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-25 00:07:51 +0000 UTC,LastTransitionTime:2020-01-25 
00:07:43 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 25 00:07:51.748: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-8730 /apis/apps/v1/namespaces/deployment-8730/replicasets/test-recreate-deployment-5f94c574ff c87167ed-8630-4beb-b591-a489fafd0be6 4121620 1 2020-01-25 00:07:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 0f474a84-f487-4f08-a7bb-0dbd017f5d8e 0xc005332077 0xc005332078}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053320d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 25 00:07:51.749: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 25 00:07:51.749: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-8730 /apis/apps/v1/namespaces/deployment-8730/replicasets/test-recreate-deployment-799c574856 55f197e5-20d8-4210-9578-b3d2dc3f8683 4121608 2 2020-01-25 00:07:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 0f474a84-f487-4f08-a7bb-0dbd017f5d8e 0xc005332147 0xc005332148}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0053321b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 25 00:07:51.755: INFO: Pod "test-recreate-deployment-5f94c574ff-x4jrr" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-x4jrr test-recreate-deployment-5f94c574ff- deployment-8730 /api/v1/namespaces/deployment-8730/pods/test-recreate-deployment-5f94c574ff-x4jrr 782b64c8-824f-436e-a891-44686b401468 4121618 0 2020-01-25 00:07:51 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff c87167ed-8630-4beb-b591-a489fafd0be6 0xc005332617 0xc005332618}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hd7zj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hd7zj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hd7zj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-01-25 00:07:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:07:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:07:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:07:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 00:07:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:07:51.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8730" for this suite. • [SLOW TEST:8.575 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":76,"skipped":1271,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:07:51.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 25 00:07:51.967: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 25 00:07:51.992: INFO: Waiting for terminating namespaces to be deleted... 
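For context on the predicate exercised next: the kubelet rejects a pod only when another pod on the node already binds the same (hostIP, hostPort, protocol) triple, so pods sharing a bare hostPort can still coexist. A minimal sketch, assuming the k8s.io/api and k8s.io/apimachinery modules are available (image and node-label pinning mirror the log; the helper name is illustrative), of the three port specs the STEP lines below create:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod exposing hostPort 54321; only hostIP and protocol vary.
func hostPortPod(name, hostIP string, proto corev1.Protocol) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// The test pins all three pods to the node carrying the random label seen in the log.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-6e42461a-593f-4f36-a91e-ab9ca971270d": "90",
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54321,
					HostPort:      54321,  // shared across all three pods
					HostIP:        hostIP, // conflicts are keyed on (hostIP, hostPort, protocol)
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	for _, p := range []corev1.Pod{
		hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP),
		hostPortPod("pod2", "127.0.0.2", corev1.ProtocolTCP), // differs from pod1 by hostIP
		hostPortPod("pod3", "127.0.0.2", corev1.ProtocolUDP), // differs from pod2 by protocol
	} {
		port := p.Spec.Containers[0].Ports[0]
		fmt.Printf("%s: %s:%d/%s\n", p.Name, port.HostIP, port.HostPort, port.Protocol)
	}
}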
Jan 25 00:07:52.003: INFO: Logging pods the kubelet thinks are on node jerma-node before test Jan 25 00:07:52.071: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 25 00:07:52.071: INFO: Container weave ready: true, restart count 1 Jan 25 00:07:52.071: INFO: Container weave-npc ready: true, restart count 0 Jan 25 00:07:52.071: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded) Jan 25 00:07:52.071: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 00:07:52.071: INFO: test-recreate-deployment-5f94c574ff-x4jrr from deployment-8730 started at 2020-01-25 00:07:51 +0000 UTC (1 container status recorded) Jan 25 00:07:52.071: INFO: Container httpd ready: false, restart count 0 Jan 25 00:07:52.071: INFO: Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test Jan 25 00:07:52.088: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 25 00:07:52.088: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 25 00:07:52.088: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded) Jan 25 00:07:52.088: INFO: Container kube-proxy ready: true, restart count 0 Jan 25 00:07:52.088: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 25 00:07:52.088: INFO: Container weave ready: true, restart count 0 Jan 25 00:07:52.088: INFO: Container weave-npc ready: true, restart count 0 Jan 25 00:07:52.088: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 25 00:07:52.088: INFO: Container kube-scheduler ready: true, restart count 3 Jan 25 00:07:52.088: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded) Jan 25 00:07:52.088: INFO: Container kube-apiserver ready: true, restart count 1 Jan 25 00:07:52.088: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded) Jan 25 00:07:52.088: INFO: Container etcd ready: true, restart count 1 Jan 25 00:07:52.088: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 25 00:07:52.088: INFO: Container coredns ready: true, restart count 0 Jan 25 00:07:52.088: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded) Jan 25 00:07:52.088: INFO: Container coredns ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete the pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-6e42461a-593f-4f36-a91e-ab9ca971270d 90 STEP: Trying to create a pod (pod1) with hostport 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostport 54321 but hostIP 127.0.0.2 on the node on which pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostport 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node on which pod2 resides STEP: removing the label kubernetes.io/e2e-6e42461a-593f-4f36-a91e-ab9ca971270d off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-6e42461a-593f-4f36-a91e-ab9ca971270d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:08:26.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4043" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 • [SLOW TEST:34.882 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":77,"skipped":1275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:08:26.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 25 00:08:26.761: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea" in namespace "projected-9796" to be "success or failure" Jan 25 00:08:26.803: INFO: Pod "downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea": Phase="Pending", Reason="", readiness=false. Elapsed: 41.478701ms Jan 25 00:08:28.811: INFO: Pod "downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049631863s Jan 25 00:08:30.816: INFO: Pod "downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.054625829s Jan 25 00:08:32.820: INFO: Pod "downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058323215s Jan 25 00:08:34.834: INFO: Pod "downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072360947s Jan 25 00:08:36.848: INFO: Pod "downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086820962s STEP: Saw pod success Jan 25 00:08:36.848: INFO: Pod "downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea" satisfied condition "success or failure" Jan 25 00:08:36.874: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea container client-container: STEP: delete the pod Jan 25 00:08:36.939: INFO: Waiting for pod downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea to disappear Jan 25 00:08:36.947: INFO: Pod downwardapi-volume-e8fc95e2-e57d-472b-8795-1b4a26959aea no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:08:36.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9796" for this suite. • [SLOW TEST:10.322 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1300,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:08:36.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 25 00:08:37.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8935dae-191c-4c34-ad4e-46a803a3c40b" in namespace "downward-api-3570" to be "success or failure" Jan 25 00:08:37.273: INFO: Pod "downwardapi-volume-b8935dae-191c-4c34-ad4e-46a803a3c40b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.07728ms Jan 25 00:08:39.317: INFO: Pod "downwardapi-volume-b8935dae-191c-4c34-ad4e-46a803a3c40b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101081847s Jan 25 00:08:41.325: INFO: Pod "downwardapi-volume-b8935dae-191c-4c34-ad4e-46a803a3c40b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.109231879s Jan 25 00:08:43.332: INFO: Pod "downwardapi-volume-b8935dae-191c-4c34-ad4e-46a803a3c40b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115984085s Jan 25 00:08:45.338: INFO: Pod "downwardapi-volume-b8935dae-191c-4c34-ad4e-46a803a3c40b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121782598s STEP: Saw pod success Jan 25 00:08:45.338: INFO: Pod "downwardapi-volume-b8935dae-191c-4c34-ad4e-46a803a3c40b" satisfied condition "success or failure" Jan 25 00:08:45.341: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b8935dae-191c-4c34-ad4e-46a803a3c40b container client-container: STEP: delete the pod Jan 25 00:08:45.395: INFO: Waiting for pod downwardapi-volume-b8935dae-191c-4c34-ad4e-46a803a3c40b to disappear Jan 25 00:08:45.401: INFO: Pod downwardapi-volume-b8935dae-191c-4c34-ad4e-46a803a3c40b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:08:45.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3570" for this suite. • [SLOW TEST:8.481 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1307,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:08:45.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-bd9d3e98-7bff-4843-b526-f7532d0c67b0 STEP: Creating a pod to test consume secrets Jan 25 00:08:45.657: INFO: Waiting up to 5m0s for pod "pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59" in namespace "secrets-6519" to be "success or failure" Jan 25 00:08:45.703: INFO: Pod "pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59": Phase="Pending", Reason="", readiness=false. Elapsed: 45.598544ms Jan 25 00:08:47.712: INFO: Pod "pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054211071s Jan 25 00:08:49.717: INFO: Pod "pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060043511s Jan 25 00:08:51.723: INFO: Pod "pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066049381s Jan 25 00:08:53.730: INFO: Pod "pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.072767178s Jan 25 00:08:55.741: INFO: Pod "pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083114716s STEP: Saw pod success Jan 25 00:08:55.741: INFO: Pod "pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59" satisfied condition "success or failure" Jan 25 00:08:55.745: INFO: Trying to get logs from node jerma-node pod pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59 container secret-volume-test: STEP: delete the pod Jan 25 00:08:56.346: INFO: Waiting for pod pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59 to disappear Jan 25 00:08:56.350: INFO: Pod pod-secrets-5ab582aa-674a-4145-8843-9db2e0fb0c59 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:08:56.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6519" for this suite. • [SLOW TEST:10.913 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1311,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:08:56.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:09:05.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7322" for this suite. 
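The adoption just logged hinges on label selection: a live pod with no owner reference whose labels satisfy a ReplicationController's selector is claimed by that controller. A minimal sketch of the matching step, assuming the k8s.io/api and k8s.io/apimachinery modules (object names mirror the log; the check is illustrative, not the controller manager's actual code path):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// The orphan pod carries only a 'name' label and no owner reference.
	orphan := corev1.Pod{ObjectMeta: metav1.ObjectMeta{
		Name:   "pod-adoption",
		Labels: map[string]string{"name": "pod-adoption"},
	}}

	// A ReplicationController whose selector matches that label.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Selector: map[string]string{"name": "pod-adoption"},
		},
	}

	// The controller adopts any live, unowned pod whose labels satisfy the
	// selector, setting itself as the pod's controller owner reference.
	sel := labels.SelectorFromSet(labels.Set(rc.Spec.Selector))
	fmt.Println("would adopt:",
		sel.Matches(labels.Set(orphan.Labels)) && len(orphan.OwnerReferences) == 0)
}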
• [SLOW TEST:9.232 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":81,"skipped":1322,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:09:05.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Jan 25 00:09:05.665: INFO: PodSpec: initContainers in spec.initContainers Jan 25 00:10:02.618: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d64bed40-53d1-4456-a587-7a3c42be5e50", GenerateName:"", Namespace:"init-container-152", SelfLink:"/api/v1/namespaces/init-container-152/pods/pod-init-d64bed40-53d1-4456-a587-7a3c42be5e50", UID:"ddb30a63-e46a-4af0-b1ab-c0e35a0cb832", ResourceVersion:"4122148", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715507745, loc:(*time.Location)(0x7d7cf00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"665106739"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-99frt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001bf6e00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-99frt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-99frt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-99frt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0048b0308), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025c4780), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0048b0390)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0048b03b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0048b03b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0048b03bc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507745, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507745, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507745, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715507745, loc:(*time.Location)(0x7d7cf00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.2", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.2"}}, StartTime:(*v1.Time)(0xc0010ff920), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001091110)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001091180)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://a07e334114cf29b929621389e222f99072f7a2fb8effcda9f5f5b079e360ee9d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0010ff960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0010ff940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0048b043f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:10:02.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-152" for this suite. • [SLOW TEST:57.054 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":82,"skipped":1323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:10:02.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5138 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5138 I0125 00:10:02.920147 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5138, replica count: 2 I0125 00:10:05.970874 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:10:08.971573 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:10:11.972018 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady Jan 25 00:10:11.972: INFO: Creating new exec pod Jan 25 00:10:21.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5138 execpod7h6sf -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 25 00:10:23.597: INFO: stderr: "I0125 00:10:23.357831 1195 log.go:172] (0xc0001058c0) (0xc0008b20a0) Create stream\nI0125 00:10:23.358008 1195 log.go:172] (0xc0001058c0) (0xc0008b20a0) Stream added, broadcasting: 1\nI0125 00:10:23.365174 1195 log.go:172] (0xc0001058c0) Reply frame received for 1\nI0125 00:10:23.365272 1195 log.go:172] (0xc0001058c0) (0xc0008b2140) Create stream\nI0125 00:10:23.365294 1195 log.go:172] (0xc0001058c0) (0xc0008b2140) Stream added, broadcasting: 3\nI0125 00:10:23.368440 1195 log.go:172] (0xc0001058c0) Reply frame received for 3\nI0125 00:10:23.368508 1195 log.go:172] (0xc0001058c0) (0xc00063fa40) Create stream\nI0125 00:10:23.368532 1195 log.go:172] (0xc0001058c0) (0xc00063fa40) Stream added, broadcasting: 5\nI0125 00:10:23.372008 1195 log.go:172] (0xc0001058c0) Reply frame received for 5\nI0125 00:10:23.495719 1195 log.go:172] (0xc0001058c0) Data frame received for 5\nI0125 00:10:23.495880 1195 log.go:172] (0xc00063fa40) (5) Data frame handling\nI0125 00:10:23.495913 1195 log.go:172] (0xc00063fa40) (5) Data frame sent\nI0125 00:10:23.495925 1195 log.go:172] (0xc0001058c0) Data frame received for 5\nI0125 00:10:23.495938 1195 log.go:172] (0xc00063fa40) (5) Data frame handling\n+ nc -zv -t -wI0125 00:10:23.495972 1195 log.go:172] (0xc00063fa40) (5) Data frame sent\nI0125 00:10:23.498870 1195 log.go:172] (0xc0001058c0) Data frame received for 5\nI0125 00:10:23.498929 1195 log.go:172] (0xc00063fa40) (5) Data frame handling\nI0125 00:10:23.498945 1195 log.go:172] (0xc00063fa40) (5) Data frame sent\n 2I0125 00:10:23.499835 1195 log.go:172] (0xc0001058c0) Data frame received for 5\nI0125 00:10:23.499866 1195 log.go:172] (0xc00063fa40) (5) Data frame handling\nI0125 00:10:23.499881 1195 log.go:172] (0xc00063fa40) (5) Data frame sent\n externalname-service 80I0125 00:10:23.500405 1195 log.go:172] (0xc0001058c0) Data frame received for 5\nI0125 00:10:23.500422 1195 log.go:172] (0xc00063fa40) (5) Data frame handling\nI0125 00:10:23.500440 1195 log.go:172] (0xc00063fa40) (5) Data frame sent\n\nI0125 00:10:23.508730 1195 log.go:172] (0xc0001058c0) Data frame received for 5\nI0125 00:10:23.508770 1195 log.go:172] (0xc00063fa40) (5) Data frame handling\nI0125 00:10:23.508789 1195 log.go:172] (0xc00063fa40) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0125 00:10:23.582170 1195 log.go:172] (0xc0001058c0) Data frame received for 1\nI0125 00:10:23.582319 1195 log.go:172] (0xc0001058c0) (0xc0008b2140) Stream removed, broadcasting: 3\nI0125 00:10:23.582356 1195 log.go:172] (0xc0008b20a0) (1) Data frame handling\nI0125 00:10:23.582376 1195 log.go:172] (0xc0008b20a0) (1) Data frame sent\nI0125 00:10:23.582405 1195 log.go:172] (0xc0001058c0) (0xc00063fa40) Stream removed, broadcasting: 5\nI0125 00:10:23.582430 1195 log.go:172] (0xc0001058c0) (0xc0008b20a0) Stream removed, broadcasting: 1\nI0125 00:10:23.582446 1195 log.go:172] (0xc0001058c0) Go away received\nI0125 00:10:23.584429 1195 log.go:172] (0xc0001058c0) (0xc0008b20a0) Stream removed, broadcasting: 1\nI0125 00:10:23.584606 1195 log.go:172] (0xc0001058c0) (0xc0008b2140) Stream removed, broadcasting: 3\nI0125 00:10:23.584625 1195 log.go:172] (0xc0001058c0) (0xc00063fa40) Stream removed, broadcasting: 5\n" Jan 25 
00:10:23.597: INFO: stdout: "" Jan 25 00:10:23.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5138 execpod7h6sf -- /bin/sh -x -c nc -zv -t -w 2 10.96.246.220 80' Jan 25 00:10:24.089: INFO: stderr: "I0125 00:10:23.861759 1229 log.go:172] (0xc000a84790) (0xc0009ec280) Create stream\nI0125 00:10:23.862242 1229 log.go:172] (0xc000a84790) (0xc0009ec280) Stream added, broadcasting: 1\nI0125 00:10:23.872573 1229 log.go:172] (0xc000a84790) Reply frame received for 1\nI0125 00:10:23.872725 1229 log.go:172] (0xc000a84790) (0xc0009ec320) Create stream\nI0125 00:10:23.872758 1229 log.go:172] (0xc000a84790) (0xc0009ec320) Stream added, broadcasting: 3\nI0125 00:10:23.875761 1229 log.go:172] (0xc000a84790) Reply frame received for 3\nI0125 00:10:23.875845 1229 log.go:172] (0xc000a84790) (0xc00069be00) Create stream\nI0125 00:10:23.875865 1229 log.go:172] (0xc000a84790) (0xc00069be00) Stream added, broadcasting: 5\nI0125 00:10:23.877823 1229 log.go:172] (0xc000a84790) Reply frame received for 5\nI0125 00:10:23.982799 1229 log.go:172] (0xc000a84790) Data frame received for 5\nI0125 00:10:23.982997 1229 log.go:172] (0xc00069be00) (5) Data frame handling\nI0125 00:10:23.983044 1229 log.go:172] (0xc00069be00) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.246.220 80\nI0125 00:10:23.984387 1229 log.go:172] (0xc000a84790) Data frame received for 5\nI0125 00:10:23.984497 1229 log.go:172] (0xc00069be00) (5) Data frame handling\nI0125 00:10:23.984646 1229 log.go:172] (0xc00069be00) (5) Data frame sent\nConnection to 10.96.246.220 80 port [tcp/http] succeeded!\nI0125 00:10:24.066662 1229 log.go:172] (0xc000a84790) Data frame received for 1\nI0125 00:10:24.067098 1229 log.go:172] (0xc000a84790) (0xc00069be00) Stream removed, broadcasting: 5\nI0125 00:10:24.067702 1229 log.go:172] (0xc000a84790) (0xc0009ec320) Stream removed, broadcasting: 3\nI0125 00:10:24.068044 1229 log.go:172] (0xc0009ec280) (1) Data frame handling\nI0125 00:10:24.068199 1229 log.go:172] (0xc0009ec280) (1) Data frame sent\nI0125 00:10:24.068264 1229 log.go:172] (0xc000a84790) (0xc0009ec280) Stream removed, broadcasting: 1\nI0125 00:10:24.068357 1229 log.go:172] (0xc000a84790) Go away received\nI0125 00:10:24.070651 1229 log.go:172] (0xc000a84790) (0xc0009ec280) Stream removed, broadcasting: 1\nI0125 00:10:24.070709 1229 log.go:172] (0xc000a84790) (0xc0009ec320) Stream removed, broadcasting: 3\nI0125 00:10:24.070727 1229 log.go:172] (0xc000a84790) (0xc00069be00) Stream removed, broadcasting: 5\n" Jan 25 00:10:24.089: INFO: stdout: "" Jan 25 00:10:24.089: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:10:24.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5138" for this suite. 
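The type flip just exercised is a single spec mutation: the ExternalName (CNAME-style) service becomes a selector-backed ClusterIP service that the nc probes above can reach on port 80. A minimal sketch, assuming the k8s.io/api and k8s.io/apimachinery modules (the ExternalName target and the selector are placeholders, since the log does not show them):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Initial state: a CNAME-style service; foo.example.com is a placeholder.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service", Namespace: "services-5138"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}

	// The type change: drop the CNAME target and give the service a selector
	// and port so the endpoints controller can back a cluster IP with the
	// replication controller's pods. On a real cluster this mutation would be
	// sent as an Update through client-go rather than done in memory.
	svc.Spec.ExternalName = ""
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.Selector = map[string]string{"name": "externalname-service"}
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80, Protocol: corev1.ProtocolTCP}}

	out, _ := json.MarshalIndent(svc.Spec, "", "  ")
	fmt.Println(string(out))
}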
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 • [SLOW TEST:21.536 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":83,"skipped":1361,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:10:24.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 25 00:10:24.386: INFO: Waiting up to 5m0s for pod "pod-47df3c0e-a965-4b61-9180-873c6e067343" in namespace "emptydir-824" to be "success or failure" Jan 25 00:10:24.413: INFO: Pod "pod-47df3c0e-a965-4b61-9180-873c6e067343": Phase="Pending", Reason="", readiness=false. Elapsed: 26.470637ms Jan 25 00:10:26.420: INFO: Pod "pod-47df3c0e-a965-4b61-9180-873c6e067343": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033971846s Jan 25 00:10:28.427: INFO: Pod "pod-47df3c0e-a965-4b61-9180-873c6e067343": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040276344s Jan 25 00:10:30.432: INFO: Pod "pod-47df3c0e-a965-4b61-9180-873c6e067343": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04569114s Jan 25 00:10:33.290: INFO: Pod "pod-47df3c0e-a965-4b61-9180-873c6e067343": Phase="Pending", Reason="", readiness=false. Elapsed: 8.903797137s Jan 25 00:10:35.297: INFO: Pod "pod-47df3c0e-a965-4b61-9180-873c6e067343": Phase="Pending", Reason="", readiness=false. Elapsed: 10.910401236s Jan 25 00:10:37.305: INFO: Pod "pod-47df3c0e-a965-4b61-9180-873c6e067343": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.919050958s STEP: Saw pod success Jan 25 00:10:37.305: INFO: Pod "pod-47df3c0e-a965-4b61-9180-873c6e067343" satisfied condition "success or failure" Jan 25 00:10:37.312: INFO: Trying to get logs from node jerma-node pod pod-47df3c0e-a965-4b61-9180-873c6e067343 container test-container: STEP: delete the pod Jan 25 00:10:37.443: INFO: Waiting for pod pod-47df3c0e-a965-4b61-9180-873c6e067343 to disappear Jan 25 00:10:37.449: INFO: Pod pod-47df3c0e-a965-4b61-9180-873c6e067343 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:10:37.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-824" for this suite. 
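For reference, the volume under test above is an emptyDir explicitly backed by memory, so the 0666 file lands on tmpfs rather than node disk. A minimal sketch, assuming the k8s.io/api and k8s.io/apimachinery modules (busybox and the shell one-liner stand in for the conformance suite's own mounttest image and arguments):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file as root with mode 0666 and read the mode back;
				// the real test asserts on equivalent output from its container.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.Marshal(pod.Spec.Volumes[0])
	fmt.Println(string(out))
}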
• [SLOW TEST:13.267 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1361,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:10:37.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7438.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7438.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7438.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7438.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7438.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7438.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7438.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7438.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 221.204.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.204.221_udp@PTR;check="$$(dig +tcp +noall +answer +search 221.204.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.204.221_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7438.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7438.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7438.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7438.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7438.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7438.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7438.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7438.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7438.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7438.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 221.204.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.204.221_udp@PTR;check="$$(dig +tcp +noall +answer +search 221.204.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.204.221_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 00:10:47.843: INFO: Unable to read wheezy_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:47.851: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:47.856: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:47.865: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:47.902: INFO: Unable to read jessie_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:47.907: INFO: Unable to read jessie_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:47.910: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:47.913: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:47.944: INFO: Lookups using dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6 failed for: [wheezy_udp@dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_udp@dns-test-service.dns-7438.svc.cluster.local jessie_tcp@dns-test-service.dns-7438.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local] Jan 25 00:10:52.957: INFO: Unable to read wheezy_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:52.968: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods 
dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:52.991: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:53.001: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:53.038: INFO: Unable to read jessie_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:53.041: INFO: Unable to read jessie_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:53.053: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:53.068: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:53.098: INFO: Lookups using dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6 failed for: [wheezy_udp@dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_udp@dns-test-service.dns-7438.svc.cluster.local jessie_tcp@dns-test-service.dns-7438.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local] Jan 25 00:10:57.952: INFO: Unable to read wheezy_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:57.963: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:57.971: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:57.976: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:58.042: INFO: Unable to read jessie_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the 
server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:58.045: INFO: Unable to read jessie_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:58.048: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:58.051: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:10:58.067: INFO: Lookups using dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6 failed for: [wheezy_udp@dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_udp@dns-test-service.dns-7438.svc.cluster.local jessie_tcp@dns-test-service.dns-7438.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local] Jan 25 00:11:02.984: INFO: Unable to read wheezy_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:02.988: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:02.991: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:02.994: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:03.018: INFO: Unable to read jessie_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:03.023: INFO: Unable to read jessie_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:03.028: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:03.034: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod 
dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:03.082: INFO: Lookups using dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6 failed for: [wheezy_udp@dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_udp@dns-test-service.dns-7438.svc.cluster.local jessie_tcp@dns-test-service.dns-7438.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local] Jan 25 00:11:07.956: INFO: Unable to read wheezy_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:07.964: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:07.969: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:07.975: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:08.025: INFO: Unable to read jessie_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:08.029: INFO: Unable to read jessie_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:08.033: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:08.037: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:08.066: INFO: Lookups using dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6 failed for: [wheezy_udp@dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_udp@dns-test-service.dns-7438.svc.cluster.local jessie_tcp@dns-test-service.dns-7438.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local] Jan 25 
00:11:12.955: INFO: Unable to read wheezy_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:12.963: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:12.971: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:12.979: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:13.034: INFO: Unable to read jessie_udp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:13.039: INFO: Unable to read jessie_tcp@dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:13.042: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:13.063: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local from pod dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6: the server could not find the requested resource (get pods dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6) Jan 25 00:11:13.101: INFO: Lookups using dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6 failed for: [wheezy_udp@dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@dns-test-service.dns-7438.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_udp@dns-test-service.dns-7438.svc.cluster.local jessie_tcp@dns-test-service.dns-7438.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7438.svc.cluster.local] Jan 25 00:11:18.037: INFO: DNS probes using dns-7438/dns-test-e7cdecc9-e1a9-4625-9b1b-010a6439a8a6 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:11:18.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7438" for this suite. 
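
The repeated "Unable to read ... the server could not find the requested resource" lines above are the test's normal polling behavior, not a cluster failure: the DNS pod runs two query containers (one wheezy-based, one jessie-based) that resolve each generated name over UDP and TCP and write the answers to result files, and the framework re-reads those files every few seconds until every lookup has produced an answer, which here happens at 00:11:18. A minimal sketch of that retry pattern, assuming k8s.io/apimachinery's wait helpers; readProbeFile is a hypothetical stand-in for the framework's pod-proxy file read:

    package sketch

    import (
    	"errors"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // readProbeFile stands in for reading one probe container's result
    // file through the pods proxy; it fails until that lookup succeeds.
    func readProbeFile(name string) (string, error) {
    	return "", errors.New("the server could not find the requested resource")
    }

    // waitForDNS polls until every generated lookup has written a result,
    // mirroring the "Lookups ... failed for: [...]" retry loop above.
    func waitForDNS(fileNames []string) error {
    	return wait.Poll(5*time.Second, 10*time.Minute, func() (bool, error) {
    		failed := 0
    		for _, name := range fileNames {
    			if _, err := readProbeFile(name); err != nil {
    				failed++ // surfaces as "Unable to read ..." above
    			}
    		}
    		return failed == 0, nil
    	})
    }
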
• [SLOW TEST:40.850 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":85,"skipped":1373,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:11:18.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:11:29.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6840" for this suite. • [SLOW TEST:11.291 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":86,"skipped":1374,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:11:29.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 25 00:11:29.741: INFO: Waiting up to 5m0s for pod "pod-91e2bab6-b498-463c-842e-5f28b34d74dc" in namespace "emptydir-7079" to be "success or failure" Jan 25 00:11:29.748: INFO: Pod "pod-91e2bab6-b498-463c-842e-5f28b34d74dc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.197146ms Jan 25 00:11:31.753: INFO: Pod "pod-91e2bab6-b498-463c-842e-5f28b34d74dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011818669s Jan 25 00:11:33.769: INFO: Pod "pod-91e2bab6-b498-463c-842e-5f28b34d74dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027559332s Jan 25 00:11:35.781: INFO: Pod "pod-91e2bab6-b498-463c-842e-5f28b34d74dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040067599s Jan 25 00:11:38.469: INFO: Pod "pod-91e2bab6-b498-463c-842e-5f28b34d74dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.727915278s STEP: Saw pod success Jan 25 00:11:38.469: INFO: Pod "pod-91e2bab6-b498-463c-842e-5f28b34d74dc" satisfied condition "success or failure" Jan 25 00:11:38.475: INFO: Trying to get logs from node jerma-node pod pod-91e2bab6-b498-463c-842e-5f28b34d74dc container test-container: STEP: delete the pod Jan 25 00:11:38.630: INFO: Waiting for pod pod-91e2bab6-b498-463c-842e-5f28b34d74dc to disappear Jan 25 00:11:38.639: INFO: Pod pod-91e2bab6-b498-463c-842e-5f28b34d74dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:11:38.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7079" for this suite. • [SLOW TEST:9.053 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1380,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:11:38.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 00:11:38.795: INFO: Creating ReplicaSet my-hostname-basic-bf9b62d0-8141-43a2-953f-bb6ec4e9160f Jan 25 00:11:38.812: INFO: Pod name my-hostname-basic-bf9b62d0-8141-43a2-953f-bb6ec4e9160f: Found 0 pods out of 1 Jan 25 00:11:43.816: INFO: Pod name my-hostname-basic-bf9b62d0-8141-43a2-953f-bb6ec4e9160f: Found 1 pods out of 1 Jan 25 00:11:43.816: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-bf9b62d0-8141-43a2-953f-bb6ec4e9160f" is running Jan 25 00:11:45.829: INFO: Pod "my-hostname-basic-bf9b62d0-8141-43a2-953f-bb6ec4e9160f-mnl9z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 00:11:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 00:11:39 +0000 UTC 
Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bf9b62d0-8141-43a2-953f-bb6ec4e9160f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 00:11:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bf9b62d0-8141-43a2-953f-bb6ec4e9160f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-25 00:11:38 +0000 UTC Reason: Message:}]) Jan 25 00:11:45.829: INFO: Trying to dial the pod Jan 25 00:11:50.889: INFO: Controller my-hostname-basic-bf9b62d0-8141-43a2-953f-bb6ec4e9160f: Got expected result from replica 1 [my-hostname-basic-bf9b62d0-8141-43a2-953f-bb6ec4e9160f-mnl9z]: "my-hostname-basic-bf9b62d0-8141-43a2-953f-bb6ec4e9160f-mnl9z", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:11:50.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5349" for this suite. • [SLOW TEST:12.271 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":88,"skipped":1411,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:11:50.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name cm-test-opt-del-3d365a8b-e39f-406e-92c2-9ed2ebace756 STEP: Creating configMap with name cm-test-opt-upd-ab52d615-a2ad-4245-82d9-ee1f27249f8d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-3d365a8b-e39f-406e-92c2-9ed2ebace756 STEP: Updating configmap cm-test-opt-upd-ab52d615-a2ad-4245-82d9-ee1f27249f8d STEP: Creating configMap with name cm-test-opt-create-c660d3d5-2bef-4a48-89d6-b98e06a160cd STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:13:26.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9605" for this suite. 
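
The optional-ConfigMap spec above exercises the kubelet's volume refresh: files in a ConfigMap-backed volume are rewritten on the kubelet's periodic sync, so deletions, updates, and late creations of optional ConfigMaps all eventually show up in the mounted directory, which is why this spec routinely runs well over a minute (95s here). A sketch of a volume definition with the optional flag, assuming k8s.io/api/core/v1 (the ConfigMap name is taken from the log; the volume name is illustrative):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // optionalConfigMapVolume builds a ConfigMap volume that tolerates the
    // ConfigMap being absent; kubelet populates it if the object appears later.
    func optionalConfigMapVolume() corev1.Volume {
    	optional := true
    	return corev1.Volume{
    		Name: "cm-volume",
    		VolumeSource: corev1.VolumeSource{
    			ConfigMap: &corev1.ConfigMapVolumeSource{
    				LocalObjectReference: corev1.LocalObjectReference{
    					Name: "cm-test-opt-create-c660d3d5-2bef-4a48-89d6-b98e06a160cd",
    				},
    				Optional: &optional,
    			},
    		},
    	}
    }
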
• [SLOW TEST:95.404 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1417,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:13:26.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 25 00:13:42.631: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 00:13:42.641: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 00:13:44.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 00:13:44.860: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 00:13:46.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 00:13:46.647: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 00:13:48.642: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 00:13:48.651: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 00:13:50.642: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 00:13:50.658: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 00:13:52.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 00:13:52.648: INFO: Pod pod-with-poststart-exec-hook still exists Jan 25 00:13:54.641: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 25 00:13:54.649: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:13:54.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3116" for this suite. 
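
A postStart exec hook runs immediately after the container is created, and the kubelet kills the container if the hook fails; the "still exists" lines afterwards are simply the framework polling pod deletion every two seconds. A sketch of the hook wiring, assuming the v1.17-era k8s.io/api/core/v1 types used by this run (corev1.Handler was later renamed corev1.LifecycleHandler); the command is illustrative, not the test's actual handler:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // withPostStartExec attaches an exec postStart hook to a container.
    func withPostStartExec(c corev1.Container) corev1.Container {
    	c.Lifecycle = &corev1.Lifecycle{
    		PostStart: &corev1.Handler{
    			Exec: &corev1.ExecAction{
    				// Illustrative command; the e2e test's hook instead contacts
    				// the HTTPGet helper pod created in BeforeEach.
    				Command: []string{"sh", "-c", "echo poststart ran"},
    			},
    		},
    	}
    	return c
    }
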
• [SLOW TEST:28.336 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1420,"failed":0} S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:13:54.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:14:04.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3000" for this suite. 
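
The wrapper-volume spec above mounts a Secret volume and a ConfigMap volume in the same pod; in the kubelet both are materialized on top of an emptyDir-backed wrapper, and the spec asserts the two atomic writers do not conflict over it. A sketch of such a volume pairing, assuming k8s.io/api/core/v1; the object names are hypothetical, since the log does not show them:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // conflictProneVolumes pairs a secret and a configMap volume the way
    // the wrapper test does, exercising the shared emptyDir wrapper path.
    func conflictProneVolumes() []corev1.Volume {
    	return []corev1.Volume{
    		{
    			Name: "secret-volume",
    			VolumeSource: corev1.VolumeSource{
    				Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-test-secret"},
    			},
    		},
    		{
    			Name: "configmap-volume",
    			VolumeSource: corev1.VolumeSource{
    				ConfigMap: &corev1.ConfigMapVolumeSource{
    					LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-test-configmap"},
    				},
    			},
    		},
    	}
    }
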
• [SLOW TEST:10.397 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":91,"skipped":1421,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:14:05.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-3bed4db7-a542-4f5d-9ea2-9f43c4a02dd6 STEP: Creating a pod to test consume configMaps Jan 25 00:14:05.231: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624" in namespace "projected-1705" to be "success or failure" Jan 25 00:14:05.246: INFO: Pod "pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624": Phase="Pending", Reason="", readiness=false. Elapsed: 14.660489ms Jan 25 00:14:07.252: INFO: Pod "pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02069301s Jan 25 00:14:09.260: INFO: Pod "pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028725649s Jan 25 00:14:11.266: INFO: Pod "pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034900408s Jan 25 00:14:13.273: INFO: Pod "pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04189771s Jan 25 00:14:15.279: INFO: Pod "pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048041624s STEP: Saw pod success Jan 25 00:14:15.279: INFO: Pod "pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624" satisfied condition "success or failure" Jan 25 00:14:15.284: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624 container projected-configmap-volume-test: STEP: delete the pod Jan 25 00:14:15.328: INFO: Waiting for pod pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624 to disappear Jan 25 00:14:15.333: INFO: Pod pod-projected-configmaps-80e45c14-cfaa-4d1d-9436-28ec18d6f624 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:14:15.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1705" for this suite. 
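
Projected volumes let one volume merge several sources; in the spec above the same ConfigMap is consumed through multiple volumes of one pod, and the test container verifies both mounts see the data. A sketch of a projected ConfigMap volume, assuming k8s.io/api/core/v1 (the ConfigMap name is taken from the log):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // projectedConfigMapVolume projects a single ConfigMap into a volume;
    // the test mounts two such volumes in one pod and reads both.
    func projectedConfigMapVolume(volName string) corev1.Volume {
    	return corev1.Volume{
    		Name: volName,
    		VolumeSource: corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{
    							Name: "projected-configmap-test-volume-3bed4db7-a542-4f5d-9ea2-9f43c4a02dd6",
    						},
    					},
    				}},
    			},
    		},
    	}
    }
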
• [SLOW TEST:10.282 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:14:15.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-e55f177f-f4eb-4b85-b871-8e7c8f8c22ab STEP: Creating a pod to test consume secrets Jan 25 00:14:15.553: INFO: Waiting up to 5m0s for pod "pod-secrets-4805e0b0-6e82-4365-bc5d-7ab26050e332" in namespace "secrets-9684" to be "success or failure" Jan 25 00:14:15.589: INFO: Pod "pod-secrets-4805e0b0-6e82-4365-bc5d-7ab26050e332": Phase="Pending", Reason="", readiness=false. Elapsed: 35.226426ms Jan 25 00:14:17.596: INFO: Pod "pod-secrets-4805e0b0-6e82-4365-bc5d-7ab26050e332": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043072649s Jan 25 00:14:19.603: INFO: Pod "pod-secrets-4805e0b0-6e82-4365-bc5d-7ab26050e332": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049664671s Jan 25 00:14:21.614: INFO: Pod "pod-secrets-4805e0b0-6e82-4365-bc5d-7ab26050e332": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060186912s Jan 25 00:14:23.623: INFO: Pod "pod-secrets-4805e0b0-6e82-4365-bc5d-7ab26050e332": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069663656s STEP: Saw pod success Jan 25 00:14:23.623: INFO: Pod "pod-secrets-4805e0b0-6e82-4365-bc5d-7ab26050e332" satisfied condition "success or failure" Jan 25 00:14:23.629: INFO: Trying to get logs from node jerma-node pod pod-secrets-4805e0b0-6e82-4365-bc5d-7ab26050e332 container secret-volume-test: STEP: delete the pod Jan 25 00:14:23.842: INFO: Waiting for pod pod-secrets-4805e0b0-6e82-4365-bc5d-7ab26050e332 to disappear Jan 25 00:14:23.919: INFO: Pod pod-secrets-4805e0b0-6e82-4365-bc5d-7ab26050e332 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:14:23.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9684" for this suite. 
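
defaultMode sets the permission bits the kubelet applies to every file it writes into the secret volume, and the "success or failure" wait above is the test pod running its file-mode check to completion. A sketch, assuming k8s.io/api/core/v1 (the secret name is from the log; 0400 is an illustrative mode, since the log does not record the one used):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // secretVolumeWithMode mounts a secret with explicit file permissions.
    func secretVolumeWithMode() corev1.Volume {
    	mode := int32(0400) // illustrative; applied to every file in the volume
    	return corev1.Volume{
    		Name: "secret-volume",
    		VolumeSource: corev1.VolumeSource{
    			Secret: &corev1.SecretVolumeSource{
    				SecretName:  "secret-test-e55f177f-f4eb-4b85-b871-8e7c8f8c22ab",
    				DefaultMode: &mode,
    			},
    		},
    	}
    }
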
• [SLOW TEST:8.581 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1480,"failed":0} [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:14:23.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with configMap that has name projected-configmap-test-upd-097b5184-0cf1-465d-8a80-90222001f1c4 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-097b5184-0cf1-465d-8a80-90222001f1c4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:15:55.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7837" for this suite. 
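
As with the optional-ConfigMap spec earlier, the 90-odd seconds here are dominated by propagation, not by the update call itself: the kubelet refreshes ConfigMap-backed volume contents on its sync loop, so an update can take on the order of a minute to appear in the mounted file. The update itself is a one-liner, sketched here against the v0.17-era client-go signature matching this run (newer versions take a context and metav1.UpdateOptions); the data key and value are illustrative:

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // bumpConfigMap rewrites one key and pushes the update; the kubelet then
    // propagates it into any projected volume referencing the ConfigMap.
    func bumpConfigMap(client kubernetes.Interface, ns string, cm *corev1.ConfigMap) error {
    	cm.Data["data-1"] = "value-2" // illustrative key/value
    	_, err := client.CoreV1().ConfigMaps(ns).Update(cm)
    	return err
    }
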
• [SLOW TEST:91.493 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1480,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:15:55.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 00:15:55.541: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7261 I0125 00:15:55.566427 9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7261, replica count: 1 I0125 00:15:56.617167 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:15:57.617513 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:15:58.618252 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:15:59.619130 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:16:00.619981 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:16:01.620477 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0125 00:16:02.620988 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 25 00:16:02.753: INFO: Created: latency-svc-bf5b7 Jan 25 00:16:02.813: INFO: Got endpoints: latency-svc-bf5b7 [91.606958ms] Jan 25 00:16:02.844: INFO: Created: latency-svc-pgknn Jan 25 00:16:02.892: INFO: Created: latency-svc-mq5x7 Jan 25 00:16:02.894: INFO: Got endpoints: latency-svc-pgknn [80.892563ms] Jan 25 00:16:02.902: INFO: Got endpoints: latency-svc-mq5x7 [88.03055ms] Jan 25 00:16:02.976: INFO: Created: latency-svc-hg7n8 Jan 25 00:16:02.986: INFO: Got endpoints: latency-svc-hg7n8 [172.401756ms] Jan 25 00:16:03.017: INFO: Created: latency-svc-m2dpp Jan 25 00:16:03.029: INFO: Got endpoints: latency-svc-m2dpp [213.382506ms] Jan 25 00:16:03.066: INFO: Created: latency-svc-8sdjm Jan 25 00:16:03.176: INFO: Got endpoints: latency-svc-8sdjm [362.299336ms] Jan 
25 00:16:03.192: INFO: Created: latency-svc-g4hhl Jan 25 00:16:03.203: INFO: Got endpoints: latency-svc-g4hhl [387.385716ms] Jan 25 00:16:03.226: INFO: Created: latency-svc-dwg9l Jan 25 00:16:03.240: INFO: Got endpoints: latency-svc-dwg9l [424.449341ms] Jan 25 00:16:03.263: INFO: Created: latency-svc-gv9lv Jan 25 00:16:03.337: INFO: Got endpoints: latency-svc-gv9lv [521.282504ms] Jan 25 00:16:03.356: INFO: Created: latency-svc-96zjz Jan 25 00:16:03.368: INFO: Got endpoints: latency-svc-96zjz [552.525833ms] Jan 25 00:16:03.436: INFO: Created: latency-svc-5xc9m Jan 25 00:16:03.510: INFO: Got endpoints: latency-svc-5xc9m [694.328128ms] Jan 25 00:16:03.538: INFO: Created: latency-svc-nwjms Jan 25 00:16:03.550: INFO: Got endpoints: latency-svc-nwjms [735.077981ms] Jan 25 00:16:03.720: INFO: Created: latency-svc-z4l9j Jan 25 00:16:03.757: INFO: Got endpoints: latency-svc-z4l9j [943.024527ms] Jan 25 00:16:03.976: INFO: Created: latency-svc-p9rgg Jan 25 00:16:03.991: INFO: Got endpoints: latency-svc-p9rgg [1.17567886s] Jan 25 00:16:04.048: INFO: Created: latency-svc-mqpqd Jan 25 00:16:04.060: INFO: Got endpoints: latency-svc-mqpqd [1.244834354s] Jan 25 00:16:04.305: INFO: Created: latency-svc-hswvl Jan 25 00:16:04.347: INFO: Got endpoints: latency-svc-hswvl [1.531763726s] Jan 25 00:16:04.389: INFO: Created: latency-svc-tn9pt Jan 25 00:16:04.504: INFO: Got endpoints: latency-svc-tn9pt [1.610364201s] Jan 25 00:16:04.519: INFO: Created: latency-svc-v5rr5 Jan 25 00:16:04.521: INFO: Got endpoints: latency-svc-v5rr5 [1.618540751s] Jan 25 00:16:04.579: INFO: Created: latency-svc-kb5cz Jan 25 00:16:04.588: INFO: Got endpoints: latency-svc-kb5cz [1.602303291s] Jan 25 00:16:04.670: INFO: Created: latency-svc-t2cn4 Jan 25 00:16:04.676: INFO: Got endpoints: latency-svc-t2cn4 [1.647035303s] Jan 25 00:16:04.725: INFO: Created: latency-svc-nrq9l Jan 25 00:16:04.734: INFO: Got endpoints: latency-svc-nrq9l [1.557720511s] Jan 25 00:16:04.764: INFO: Created: latency-svc-4gkxh Jan 25 00:16:04.831: INFO: Got endpoints: latency-svc-4gkxh [1.628179804s] Jan 25 00:16:04.862: INFO: Created: latency-svc-9krn2 Jan 25 00:16:04.907: INFO: Got endpoints: latency-svc-9krn2 [1.666629682s] Jan 25 00:16:04.993: INFO: Created: latency-svc-wtv2p Jan 25 00:16:05.024: INFO: Got endpoints: latency-svc-wtv2p [1.687593561s] Jan 25 00:16:05.030: INFO: Created: latency-svc-9h29k Jan 25 00:16:05.035: INFO: Got endpoints: latency-svc-9h29k [1.667401098s] Jan 25 00:16:05.057: INFO: Created: latency-svc-vqrpk Jan 25 00:16:05.063: INFO: Got endpoints: latency-svc-vqrpk [1.553439212s] Jan 25 00:16:05.086: INFO: Created: latency-svc-zzdx2 Jan 25 00:16:05.173: INFO: Got endpoints: latency-svc-zzdx2 [1.622382525s] Jan 25 00:16:05.181: INFO: Created: latency-svc-d8rhn Jan 25 00:16:05.189: INFO: Got endpoints: latency-svc-d8rhn [1.431319214s] Jan 25 00:16:05.215: INFO: Created: latency-svc-mwk4c Jan 25 00:16:05.241: INFO: Created: latency-svc-gttw6 Jan 25 00:16:05.243: INFO: Got endpoints: latency-svc-mwk4c [1.251927952s] Jan 25 00:16:05.265: INFO: Got endpoints: latency-svc-gttw6 [1.204339151s] Jan 25 00:16:05.352: INFO: Created: latency-svc-6vxgk Jan 25 00:16:05.388: INFO: Got endpoints: latency-svc-6vxgk [1.040122744s] Jan 25 00:16:05.391: INFO: Created: latency-svc-62ltm Jan 25 00:16:05.398: INFO: Got endpoints: latency-svc-62ltm [893.00926ms] Jan 25 00:16:05.536: INFO: Created: latency-svc-95k7c Jan 25 00:16:05.551: INFO: Got endpoints: latency-svc-95k7c [1.029996314s] Jan 25 00:16:05.554: INFO: Created: latency-svc-dhptk Jan 25 00:16:05.576: 
INFO: Got endpoints: latency-svc-dhptk [987.080809ms] Jan 25 00:16:05.599: INFO: Created: latency-svc-btt99 Jan 25 00:16:05.611: INFO: Got endpoints: latency-svc-btt99 [935.234637ms] Jan 25 00:16:05.632: INFO: Created: latency-svc-bdbsn Jan 25 00:16:05.814: INFO: Got endpoints: latency-svc-bdbsn [1.079769535s] Jan 25 00:16:05.828: INFO: Created: latency-svc-q8c65 Jan 25 00:16:05.847: INFO: Got endpoints: latency-svc-q8c65 [1.016360588s] Jan 25 00:16:05.887: INFO: Created: latency-svc-6hc8x Jan 25 00:16:05.898: INFO: Got endpoints: latency-svc-6hc8x [991.060551ms] Jan 25 00:16:05.972: INFO: Created: latency-svc-vwj2c Jan 25 00:16:05.980: INFO: Got endpoints: latency-svc-vwj2c [955.121946ms] Jan 25 00:16:06.002: INFO: Created: latency-svc-7hxmt Jan 25 00:16:06.015: INFO: Got endpoints: latency-svc-7hxmt [979.60216ms] Jan 25 00:16:06.049: INFO: Created: latency-svc-xtgrq Jan 25 00:16:06.166: INFO: Got endpoints: latency-svc-xtgrq [1.102542897s] Jan 25 00:16:06.172: INFO: Created: latency-svc-rvvkc Jan 25 00:16:06.184: INFO: Got endpoints: latency-svc-rvvkc [1.010399756s] Jan 25 00:16:06.204: INFO: Created: latency-svc-rhhs2 Jan 25 00:16:06.212: INFO: Got endpoints: latency-svc-rhhs2 [1.023089466s] Jan 25 00:16:06.340: INFO: Created: latency-svc-q7tdr Jan 25 00:16:06.378: INFO: Created: latency-svc-94c4q Jan 25 00:16:06.381: INFO: Got endpoints: latency-svc-q7tdr [1.137601232s] Jan 25 00:16:06.400: INFO: Got endpoints: latency-svc-94c4q [1.135181867s] Jan 25 00:16:06.416: INFO: Created: latency-svc-ck4pv Jan 25 00:16:06.418: INFO: Got endpoints: latency-svc-ck4pv [1.029938659s] Jan 25 00:16:06.503: INFO: Created: latency-svc-947pk Jan 25 00:16:06.513: INFO: Got endpoints: latency-svc-947pk [1.115211319s] Jan 25 00:16:06.557: INFO: Created: latency-svc-br7tl Jan 25 00:16:06.568: INFO: Got endpoints: latency-svc-br7tl [1.016699567s] Jan 25 00:16:06.587: INFO: Created: latency-svc-92gjw Jan 25 00:16:06.605: INFO: Got endpoints: latency-svc-92gjw [1.02946733s] Jan 25 00:16:06.653: INFO: Created: latency-svc-9tq7x Jan 25 00:16:06.663: INFO: Got endpoints: latency-svc-9tq7x [1.051466486s] Jan 25 00:16:06.674: INFO: Created: latency-svc-7k9fs Jan 25 00:16:06.680: INFO: Got endpoints: latency-svc-7k9fs [866.734031ms] Jan 25 00:16:06.709: INFO: Created: latency-svc-gvxrg Jan 25 00:16:06.716: INFO: Got endpoints: latency-svc-gvxrg [868.68584ms] Jan 25 00:16:06.748: INFO: Created: latency-svc-pkmfc Jan 25 00:16:06.814: INFO: Got endpoints: latency-svc-pkmfc [915.599405ms] Jan 25 00:16:06.830: INFO: Created: latency-svc-vfrds Jan 25 00:16:06.845: INFO: Got endpoints: latency-svc-vfrds [864.996961ms] Jan 25 00:16:06.883: INFO: Created: latency-svc-4hwnv Jan 25 00:16:06.886: INFO: Got endpoints: latency-svc-4hwnv [870.962287ms] Jan 25 00:16:07.026: INFO: Created: latency-svc-jg4xv Jan 25 00:16:07.030: INFO: Got endpoints: latency-svc-jg4xv [863.396043ms] Jan 25 00:16:07.075: INFO: Created: latency-svc-ljwn8 Jan 25 00:16:07.075: INFO: Got endpoints: latency-svc-ljwn8 [891.329948ms] Jan 25 00:16:07.112: INFO: Created: latency-svc-pf9tc Jan 25 00:16:07.119: INFO: Got endpoints: latency-svc-pf9tc [906.667815ms] Jan 25 00:16:07.193: INFO: Created: latency-svc-5x72s Jan 25 00:16:07.202: INFO: Got endpoints: latency-svc-5x72s [821.123999ms] Jan 25 00:16:07.229: INFO: Created: latency-svc-dblvs Jan 25 00:16:07.250: INFO: Got endpoints: latency-svc-dblvs [850.354064ms] Jan 25 00:16:07.364: INFO: Created: latency-svc-g68ql Jan 25 00:16:07.367: INFO: Got endpoints: latency-svc-g68ql [949.748294ms] Jan 25 00:16:07.402: 
INFO: Created: latency-svc-8kwh9 Jan 25 00:16:07.411: INFO: Got endpoints: latency-svc-8kwh9 [897.866974ms] Jan 25 00:16:07.430: INFO: Created: latency-svc-wk85l Jan 25 00:16:07.558: INFO: Got endpoints: latency-svc-wk85l [989.385847ms] Jan 25 00:16:07.584: INFO: Created: latency-svc-r97w5 Jan 25 00:16:07.593: INFO: Got endpoints: latency-svc-r97w5 [987.102838ms] Jan 25 00:16:07.645: INFO: Created: latency-svc-g767q Jan 25 00:16:07.742: INFO: Got endpoints: latency-svc-g767q [1.079245877s] Jan 25 00:16:07.747: INFO: Created: latency-svc-7lld9 Jan 25 00:16:07.779: INFO: Got endpoints: latency-svc-7lld9 [1.098579513s] Jan 25 00:16:07.819: INFO: Created: latency-svc-mv2rb Jan 25 00:16:07.984: INFO: Created: latency-svc-sjlm7 Jan 25 00:16:07.985: INFO: Got endpoints: latency-svc-mv2rb [1.268104834s] Jan 25 00:16:08.000: INFO: Got endpoints: latency-svc-sjlm7 [1.185932573s] Jan 25 00:16:08.054: INFO: Created: latency-svc-4gb2q Jan 25 00:16:08.079: INFO: Got endpoints: latency-svc-4gb2q [1.233788767s] Jan 25 00:16:08.197: INFO: Created: latency-svc-xbtxb Jan 25 00:16:08.205: INFO: Got endpoints: latency-svc-xbtxb [1.319318274s] Jan 25 00:16:08.250: INFO: Created: latency-svc-r24vh Jan 25 00:16:08.278: INFO: Got endpoints: latency-svc-r24vh [1.248596059s] Jan 25 00:16:08.371: INFO: Created: latency-svc-wv5qw Jan 25 00:16:08.386: INFO: Got endpoints: latency-svc-wv5qw [1.310841504s] Jan 25 00:16:08.403: INFO: Created: latency-svc-zpwn9 Jan 25 00:16:08.413: INFO: Got endpoints: latency-svc-zpwn9 [1.294183387s] Jan 25 00:16:08.518: INFO: Created: latency-svc-4w8fh Jan 25 00:16:08.559: INFO: Got endpoints: latency-svc-4w8fh [1.356260678s] Jan 25 00:16:08.567: INFO: Created: latency-svc-rl8p6 Jan 25 00:16:08.570: INFO: Got endpoints: latency-svc-rl8p6 [1.319591394s] Jan 25 00:16:08.600: INFO: Created: latency-svc-5srvt Jan 25 00:16:08.610: INFO: Got endpoints: latency-svc-5srvt [1.242601647s] Jan 25 00:16:08.716: INFO: Created: latency-svc-75687 Jan 25 00:16:08.724: INFO: Got endpoints: latency-svc-75687 [1.313166522s] Jan 25 00:16:08.745: INFO: Created: latency-svc-zgb4x Jan 25 00:16:08.748: INFO: Got endpoints: latency-svc-zgb4x [1.18984732s] Jan 25 00:16:08.826: INFO: Created: latency-svc-t2tvm Jan 25 00:16:08.868: INFO: Created: latency-svc-lqt5k Jan 25 00:16:08.870: INFO: Got endpoints: latency-svc-t2tvm [1.277329002s] Jan 25 00:16:08.898: INFO: Got endpoints: latency-svc-lqt5k [1.155570094s] Jan 25 00:16:08.932: INFO: Created: latency-svc-69rxw Jan 25 00:16:08.999: INFO: Got endpoints: latency-svc-69rxw [1.21980998s] Jan 25 00:16:09.017: INFO: Created: latency-svc-l4l48 Jan 25 00:16:09.024: INFO: Got endpoints: latency-svc-l4l48 [1.039363084s] Jan 25 00:16:09.066: INFO: Created: latency-svc-kx9hn Jan 25 00:16:09.093: INFO: Got endpoints: latency-svc-kx9hn [1.092988357s] Jan 25 00:16:09.174: INFO: Created: latency-svc-m4rbg Jan 25 00:16:09.175: INFO: Got endpoints: latency-svc-m4rbg [1.096252161s] Jan 25 00:16:09.217: INFO: Created: latency-svc-zkvvr Jan 25 00:16:09.222: INFO: Got endpoints: latency-svc-zkvvr [1.015963598s] Jan 25 00:16:09.402: INFO: Created: latency-svc-f92g5 Jan 25 00:16:09.434: INFO: Got endpoints: latency-svc-f92g5 [1.155891475s] Jan 25 00:16:09.561: INFO: Created: latency-svc-rq4rv Jan 25 00:16:09.606: INFO: Got endpoints: latency-svc-rq4rv [1.220468222s] Jan 25 00:16:09.608: INFO: Created: latency-svc-vpknt Jan 25 00:16:09.643: INFO: Got endpoints: latency-svc-vpknt [1.229904332s] Jan 25 00:16:09.780: INFO: Created: latency-svc-c74d6 Jan 25 00:16:09.797: INFO: Got 
endpoints: latency-svc-c74d6 [1.238239557s] Jan 25 00:16:09.858: INFO: Created: latency-svc-vbgvv Jan 25 00:16:09.873: INFO: Got endpoints: latency-svc-vbgvv [1.303208586s] Jan 25 00:16:09.997: INFO: Created: latency-svc-g4pqx Jan 25 00:16:10.005: INFO: Got endpoints: latency-svc-g4pqx [1.395159638s] Jan 25 00:16:10.055: INFO: Created: latency-svc-s2xx7 Jan 25 00:16:10.062: INFO: Got endpoints: latency-svc-s2xx7 [1.337495397s] Jan 25 00:16:10.193: INFO: Created: latency-svc-wbw7m Jan 25 00:16:10.197: INFO: Got endpoints: latency-svc-wbw7m [1.449259777s] Jan 25 00:16:10.243: INFO: Created: latency-svc-g7lw4 Jan 25 00:16:10.247: INFO: Got endpoints: latency-svc-g7lw4 [1.376850624s] Jan 25 00:16:10.276: INFO: Created: latency-svc-vr797 Jan 25 00:16:10.280: INFO: Got endpoints: latency-svc-vr797 [1.38271482s] Jan 25 00:16:10.372: INFO: Created: latency-svc-9nrkk Jan 25 00:16:10.374: INFO: Got endpoints: latency-svc-9nrkk [1.374608705s] Jan 25 00:16:10.417: INFO: Created: latency-svc-vbj9d Jan 25 00:16:10.597: INFO: Got endpoints: latency-svc-vbj9d [1.573188426s] Jan 25 00:16:10.600: INFO: Created: latency-svc-t7x7w Jan 25 00:16:10.636: INFO: Got endpoints: latency-svc-t7x7w [1.542652648s] Jan 25 00:16:10.665: INFO: Created: latency-svc-vxflp Jan 25 00:16:10.755: INFO: Created: latency-svc-ggkmq Jan 25 00:16:10.758: INFO: Got endpoints: latency-svc-vxflp [1.582841185s] Jan 25 00:16:10.765: INFO: Got endpoints: latency-svc-ggkmq [1.543796341s] Jan 25 00:16:10.788: INFO: Created: latency-svc-6q2w6 Jan 25 00:16:10.797: INFO: Got endpoints: latency-svc-6q2w6 [1.363041162s] Jan 25 00:16:10.812: INFO: Created: latency-svc-f4fqw Jan 25 00:16:10.829: INFO: Created: latency-svc-pgpvn Jan 25 00:16:10.829: INFO: Got endpoints: latency-svc-f4fqw [1.222294063s] Jan 25 00:16:10.839: INFO: Got endpoints: latency-svc-pgpvn [1.196273031s] Jan 25 00:16:10.932: INFO: Created: latency-svc-dz7jk Jan 25 00:16:10.945: INFO: Got endpoints: latency-svc-dz7jk [1.147758654s] Jan 25 00:16:10.986: INFO: Created: latency-svc-dkq5d Jan 25 00:16:10.989: INFO: Got endpoints: latency-svc-dkq5d [1.115003815s] Jan 25 00:16:11.019: INFO: Created: latency-svc-rzrns Jan 25 00:16:11.020: INFO: Got endpoints: latency-svc-rzrns [1.014167846s] Jan 25 00:16:11.146: INFO: Created: latency-svc-fblbk Jan 25 00:16:11.155: INFO: Got endpoints: latency-svc-fblbk [1.093023328s] Jan 25 00:16:11.179: INFO: Created: latency-svc-jvw77 Jan 25 00:16:11.185: INFO: Got endpoints: latency-svc-jvw77 [988.100281ms] Jan 25 00:16:11.207: INFO: Created: latency-svc-tbvnn Jan 25 00:16:11.217: INFO: Got endpoints: latency-svc-tbvnn [970.090876ms] Jan 25 00:16:11.301: INFO: Created: latency-svc-mx2wb Jan 25 00:16:11.331: INFO: Created: latency-svc-ktbr8 Jan 25 00:16:11.332: INFO: Got endpoints: latency-svc-mx2wb [1.050952827s] Jan 25 00:16:11.361: INFO: Got endpoints: latency-svc-ktbr8 [986.836575ms] Jan 25 00:16:11.375: INFO: Created: latency-svc-vdncd Jan 25 00:16:11.472: INFO: Got endpoints: latency-svc-vdncd [874.545157ms] Jan 25 00:16:11.521: INFO: Created: latency-svc-prk22 Jan 25 00:16:11.669: INFO: Created: latency-svc-cj2fz Jan 25 00:16:11.672: INFO: Got endpoints: latency-svc-prk22 [1.036493263s] Jan 25 00:16:11.693: INFO: Got endpoints: latency-svc-cj2fz [934.453582ms] Jan 25 00:16:11.695: INFO: Created: latency-svc-rjlm2 Jan 25 00:16:11.702: INFO: Got endpoints: latency-svc-rjlm2 [936.436127ms] Jan 25 00:16:11.764: INFO: Created: latency-svc-k7gf2 Jan 25 00:16:11.857: INFO: Got endpoints: latency-svc-k7gf2 [1.059430935s] Jan 25 00:16:11.868: INFO: 
Created: latency-svc-qlrrz Jan 25 00:16:11.871: INFO: Got endpoints: latency-svc-qlrrz [1.042050425s] Jan 25 00:16:11.923: INFO: Created: latency-svc-52p9n Jan 25 00:16:11.933: INFO: Got endpoints: latency-svc-52p9n [1.094067134s] Jan 25 00:16:11.949: INFO: Created: latency-svc-jq7ll Jan 25 00:16:12.028: INFO: Got endpoints: latency-svc-jq7ll [1.08297001s] Jan 25 00:16:12.068: INFO: Created: latency-svc-2n5sr Jan 25 00:16:12.068: INFO: Created: latency-svc-x6n7t Jan 25 00:16:12.070: INFO: Got endpoints: latency-svc-2n5sr [1.08072817s] Jan 25 00:16:12.072: INFO: Got endpoints: latency-svc-x6n7t [1.052000711s] Jan 25 00:16:12.223: INFO: Created: latency-svc-5fvwt Jan 25 00:16:12.232: INFO: Created: latency-svc-rbdhq Jan 25 00:16:12.242: INFO: Got endpoints: latency-svc-5fvwt [1.086952972s] Jan 25 00:16:12.245: INFO: Got endpoints: latency-svc-rbdhq [1.059900669s] Jan 25 00:16:12.264: INFO: Created: latency-svc-j6f6b Jan 25 00:16:12.273: INFO: Got endpoints: latency-svc-j6f6b [1.055525022s] Jan 25 00:16:12.299: INFO: Created: latency-svc-2pbxq Jan 25 00:16:12.314: INFO: Got endpoints: latency-svc-2pbxq [982.578387ms] Jan 25 00:16:12.415: INFO: Created: latency-svc-v6tfj Jan 25 00:16:12.459: INFO: Got endpoints: latency-svc-v6tfj [1.098479646s] Jan 25 00:16:12.462: INFO: Created: latency-svc-68kfj Jan 25 00:16:12.476: INFO: Got endpoints: latency-svc-68kfj [1.00356061s] Jan 25 00:16:12.504: INFO: Created: latency-svc-69n4f Jan 25 00:16:12.586: INFO: Got endpoints: latency-svc-69n4f [913.282921ms] Jan 25 00:16:12.593: INFO: Created: latency-svc-fmjgx Jan 25 00:16:12.602: INFO: Got endpoints: latency-svc-fmjgx [908.904933ms] Jan 25 00:16:12.635: INFO: Created: latency-svc-kwnpl Jan 25 00:16:12.639: INFO: Got endpoints: latency-svc-kwnpl [936.99819ms] Jan 25 00:16:12.662: INFO: Created: latency-svc-rp7b8 Jan 25 00:16:12.675: INFO: Got endpoints: latency-svc-rp7b8 [818.291914ms] Jan 25 00:16:12.680: INFO: Created: latency-svc-n57sr Jan 25 00:16:12.682: INFO: Got endpoints: latency-svc-n57sr [811.184581ms] Jan 25 00:16:12.742: INFO: Created: latency-svc-2zlkk Jan 25 00:16:12.751: INFO: Got endpoints: latency-svc-2zlkk [817.964116ms] Jan 25 00:16:12.773: INFO: Created: latency-svc-wt7p2 Jan 25 00:16:12.798: INFO: Got endpoints: latency-svc-wt7p2 [769.769805ms] Jan 25 00:16:12.799: INFO: Created: latency-svc-qzj6l Jan 25 00:16:12.816: INFO: Got endpoints: latency-svc-qzj6l [746.610466ms] Jan 25 00:16:12.944: INFO: Created: latency-svc-gdwvm Jan 25 00:16:12.995: INFO: Got endpoints: latency-svc-gdwvm [923.840885ms] Jan 25 00:16:13.001: INFO: Created: latency-svc-m9wjf Jan 25 00:16:13.018: INFO: Got endpoints: latency-svc-m9wjf [775.961539ms] Jan 25 00:16:13.138: INFO: Created: latency-svc-x9qwn Jan 25 00:16:13.192: INFO: Got endpoints: latency-svc-x9qwn [946.439643ms] Jan 25 00:16:13.192: INFO: Created: latency-svc-xqrmk Jan 25 00:16:13.199: INFO: Got endpoints: latency-svc-xqrmk [926.427487ms] Jan 25 00:16:13.231: INFO: Created: latency-svc-gtfnq Jan 25 00:16:13.285: INFO: Got endpoints: latency-svc-gtfnq [970.520023ms] Jan 25 00:16:13.288: INFO: Created: latency-svc-xwwdn Jan 25 00:16:13.324: INFO: Got endpoints: latency-svc-xwwdn [863.971971ms] Jan 25 00:16:13.375: INFO: Created: latency-svc-m5wwm Jan 25 00:16:13.471: INFO: Got endpoints: latency-svc-m5wwm [995.344025ms] Jan 25 00:16:13.479: INFO: Created: latency-svc-4shsj Jan 25 00:16:13.480: INFO: Got endpoints: latency-svc-4shsj [894.620252ms] Jan 25 00:16:13.524: INFO: Created: latency-svc-664d8 Jan 25 00:16:13.528: INFO: Got endpoints: 
latency-svc-664d8 [926.150707ms] Jan 25 00:16:13.566: INFO: Created: latency-svc-jd4p6 Jan 25 00:16:13.665: INFO: Got endpoints: latency-svc-jd4p6 [1.02575508s] Jan 25 00:16:13.676: INFO: Created: latency-svc-xqh6b Jan 25 00:16:13.676: INFO: Got endpoints: latency-svc-xqh6b [1.000765634s] Jan 25 00:16:13.713: INFO: Created: latency-svc-p7dlt Jan 25 00:16:13.721: INFO: Got endpoints: latency-svc-p7dlt [1.039056565s] Jan 25 00:16:13.801: INFO: Created: latency-svc-qh45v Jan 25 00:16:13.805: INFO: Got endpoints: latency-svc-qh45v [1.053506975s] Jan 25 00:16:13.881: INFO: Created: latency-svc-jcltn Jan 25 00:16:13.925: INFO: Created: latency-svc-kj4sh Jan 25 00:16:13.925: INFO: Got endpoints: latency-svc-jcltn [1.126869889s] Jan 25 00:16:13.941: INFO: Got endpoints: latency-svc-kj4sh [1.124420072s] Jan 25 00:16:13.961: INFO: Created: latency-svc-nn9dj Jan 25 00:16:13.996: INFO: Got endpoints: latency-svc-nn9dj [999.978147ms] Jan 25 00:16:13.997: INFO: Created: latency-svc-zg67p Jan 25 00:16:14.005: INFO: Got endpoints: latency-svc-zg67p [987.071205ms] Jan 25 00:16:14.068: INFO: Created: latency-svc-zmfp6 Jan 25 00:16:14.079: INFO: Got endpoints: latency-svc-zmfp6 [887.486483ms] Jan 25 00:16:14.115: INFO: Created: latency-svc-58wlz Jan 25 00:16:14.126: INFO: Got endpoints: latency-svc-58wlz [926.737361ms] Jan 25 00:16:14.156: INFO: Created: latency-svc-sjg2b Jan 25 00:16:14.163: INFO: Got endpoints: latency-svc-sjg2b [877.862706ms] Jan 25 00:16:14.219: INFO: Created: latency-svc-9v72k Jan 25 00:16:14.246: INFO: Got endpoints: latency-svc-9v72k [922.425553ms] Jan 25 00:16:14.271: INFO: Created: latency-svc-549k9 Jan 25 00:16:14.290: INFO: Got endpoints: latency-svc-549k9 [818.665941ms] Jan 25 00:16:14.294: INFO: Created: latency-svc-gdcxz Jan 25 00:16:14.301: INFO: Got endpoints: latency-svc-gdcxz [820.333404ms] Jan 25 00:16:14.372: INFO: Created: latency-svc-s5jns Jan 25 00:16:14.396: INFO: Got endpoints: latency-svc-s5jns [868.232343ms] Jan 25 00:16:14.428: INFO: Created: latency-svc-v8k9v Jan 25 00:16:14.448: INFO: Got endpoints: latency-svc-v8k9v [783.037172ms] Jan 25 00:16:14.570: INFO: Created: latency-svc-fr45c Jan 25 00:16:14.571: INFO: Got endpoints: latency-svc-fr45c [894.425455ms] Jan 25 00:16:14.617: INFO: Created: latency-svc-qq8c7 Jan 25 00:16:14.651: INFO: Got endpoints: latency-svc-qq8c7 [929.943032ms] Jan 25 00:16:14.696: INFO: Created: latency-svc-b8wlz Jan 25 00:16:14.729: INFO: Created: latency-svc-76t82 Jan 25 00:16:14.729: INFO: Got endpoints: latency-svc-b8wlz [923.994918ms] Jan 25 00:16:14.744: INFO: Got endpoints: latency-svc-76t82 [818.345292ms] Jan 25 00:16:14.788: INFO: Created: latency-svc-8qpmg Jan 25 00:16:14.898: INFO: Got endpoints: latency-svc-8qpmg [956.94788ms] Jan 25 00:16:14.919: INFO: Created: latency-svc-zn5s8 Jan 25 00:16:14.927: INFO: Got endpoints: latency-svc-zn5s8 [930.754156ms] Jan 25 00:16:15.083: INFO: Created: latency-svc-m4tbs Jan 25 00:16:15.122: INFO: Got endpoints: latency-svc-m4tbs [1.115919886s] Jan 25 00:16:15.150: INFO: Created: latency-svc-sfvcr Jan 25 00:16:15.154: INFO: Got endpoints: latency-svc-sfvcr [1.074662087s] Jan 25 00:16:15.273: INFO: Created: latency-svc-x2d6h Jan 25 00:16:15.310: INFO: Got endpoints: latency-svc-x2d6h [1.183195703s] Jan 25 00:16:15.318: INFO: Created: latency-svc-tn5qn Jan 25 00:16:15.318: INFO: Got endpoints: latency-svc-tn5qn [1.155163434s] Jan 25 00:16:15.367: INFO: Created: latency-svc-zfkjb Jan 25 00:16:15.468: INFO: Got endpoints: latency-svc-zfkjb [1.221328497s] Jan 25 00:16:15.486: INFO: Created: 
latency-svc-gcz29 Jan 25 00:16:15.505: INFO: Got endpoints: latency-svc-gcz29 [1.213828096s] Jan 25 00:16:15.538: INFO: Created: latency-svc-4m6dl Jan 25 00:16:15.549: INFO: Got endpoints: latency-svc-4m6dl [1.248163938s] Jan 25 00:16:15.617: INFO: Created: latency-svc-7mqhx Jan 25 00:16:15.621: INFO: Got endpoints: latency-svc-7mqhx [1.22489974s] Jan 25 00:16:15.695: INFO: Created: latency-svc-b4hdm Jan 25 00:16:15.695: INFO: Got endpoints: latency-svc-b4hdm [1.246801897s] Jan 25 00:16:15.720: INFO: Created: latency-svc-xf6ls Jan 25 00:16:15.778: INFO: Got endpoints: latency-svc-xf6ls [1.207344653s] Jan 25 00:16:15.794: INFO: Created: latency-svc-gh6wg Jan 25 00:16:15.801: INFO: Got endpoints: latency-svc-gh6wg [1.149065355s] Jan 25 00:16:15.819: INFO: Created: latency-svc-xk6f2 Jan 25 00:16:15.840: INFO: Got endpoints: latency-svc-xk6f2 [1.111135526s] Jan 25 00:16:15.866: INFO: Created: latency-svc-5gcwg Jan 25 00:16:15.877: INFO: Got endpoints: latency-svc-5gcwg [76.410586ms] Jan 25 00:16:15.951: INFO: Created: latency-svc-vf65k Jan 25 00:16:15.955: INFO: Got endpoints: latency-svc-vf65k [1.210970554s] Jan 25 00:16:16.001: INFO: Created: latency-svc-mpvfm Jan 25 00:16:16.022: INFO: Got endpoints: latency-svc-mpvfm [1.124476883s] Jan 25 00:16:16.023: INFO: Created: latency-svc-lc7b2 Jan 25 00:16:16.041: INFO: Got endpoints: latency-svc-lc7b2 [1.113541807s] Jan 25 00:16:16.089: INFO: Created: latency-svc-7vb56 Jan 25 00:16:16.095: INFO: Got endpoints: latency-svc-7vb56 [973.18628ms] Jan 25 00:16:16.142: INFO: Created: latency-svc-t8nbr Jan 25 00:16:16.177: INFO: Got endpoints: latency-svc-t8nbr [1.023137806s] Jan 25 00:16:16.179: INFO: Created: latency-svc-znxck Jan 25 00:16:16.233: INFO: Got endpoints: latency-svc-znxck [922.933346ms] Jan 25 00:16:16.271: INFO: Created: latency-svc-xhvfl Jan 25 00:16:16.279: INFO: Got endpoints: latency-svc-xhvfl [960.576859ms] Jan 25 00:16:16.305: INFO: Created: latency-svc-sjnjv Jan 25 00:16:16.372: INFO: Got endpoints: latency-svc-sjnjv [904.015365ms] Jan 25 00:16:16.374: INFO: Created: latency-svc-vkczp Jan 25 00:16:16.417: INFO: Got endpoints: latency-svc-vkczp [912.630923ms] Jan 25 00:16:16.419: INFO: Created: latency-svc-krbkw Jan 25 00:16:16.427: INFO: Got endpoints: latency-svc-krbkw [877.7533ms] Jan 25 00:16:16.462: INFO: Created: latency-svc-p8kxb Jan 25 00:16:16.536: INFO: Got endpoints: latency-svc-p8kxb [914.62289ms] Jan 25 00:16:16.558: INFO: Created: latency-svc-cs2bs Jan 25 00:16:16.575: INFO: Got endpoints: latency-svc-cs2bs [880.070741ms] Jan 25 00:16:16.607: INFO: Created: latency-svc-8vtlv Jan 25 00:16:16.622: INFO: Got endpoints: latency-svc-8vtlv [843.768684ms] Jan 25 00:16:16.685: INFO: Created: latency-svc-9j675 Jan 25 00:16:16.708: INFO: Got endpoints: latency-svc-9j675 [867.034836ms] Jan 25 00:16:16.708: INFO: Created: latency-svc-9l8zw Jan 25 00:16:16.742: INFO: Got endpoints: latency-svc-9l8zw [864.388966ms] Jan 25 00:16:16.743: INFO: Created: latency-svc-t557q Jan 25 00:16:16.751: INFO: Got endpoints: latency-svc-t557q [795.878453ms] Jan 25 00:16:16.775: INFO: Created: latency-svc-smzn9 Jan 25 00:16:16.870: INFO: Got endpoints: latency-svc-smzn9 [847.401985ms] Jan 25 00:16:16.887: INFO: Created: latency-svc-2pwqr Jan 25 00:16:16.896: INFO: Got endpoints: latency-svc-2pwqr [854.772294ms] Jan 25 00:16:16.933: INFO: Created: latency-svc-zpr8v Jan 25 00:16:16.945: INFO: Got endpoints: latency-svc-zpr8v [849.799949ms] Jan 25 00:16:16.967: INFO: Created: latency-svc-kcj68 Jan 25 00:16:17.012: INFO: Got endpoints: latency-svc-kcj68 
[834.395329ms] Jan 25 00:16:17.031: INFO: Created: latency-svc-wzd5q Jan 25 00:16:17.040: INFO: Got endpoints: latency-svc-wzd5q [807.04241ms] Jan 25 00:16:17.063: INFO: Created: latency-svc-mdjqh Jan 25 00:16:17.221: INFO: Got endpoints: latency-svc-mdjqh [942.51054ms] Jan 25 00:16:17.222: INFO: Latencies: [76.410586ms 80.892563ms 88.03055ms 172.401756ms 213.382506ms 362.299336ms 387.385716ms 424.449341ms 521.282504ms 552.525833ms 694.328128ms 735.077981ms 746.610466ms 769.769805ms 775.961539ms 783.037172ms 795.878453ms 807.04241ms 811.184581ms 817.964116ms 818.291914ms 818.345292ms 818.665941ms 820.333404ms 821.123999ms 834.395329ms 843.768684ms 847.401985ms 849.799949ms 850.354064ms 854.772294ms 863.396043ms 863.971971ms 864.388966ms 864.996961ms 866.734031ms 867.034836ms 868.232343ms 868.68584ms 870.962287ms 874.545157ms 877.7533ms 877.862706ms 880.070741ms 887.486483ms 891.329948ms 893.00926ms 894.425455ms 894.620252ms 897.866974ms 904.015365ms 906.667815ms 908.904933ms 912.630923ms 913.282921ms 914.62289ms 915.599405ms 922.425553ms 922.933346ms 923.840885ms 923.994918ms 926.150707ms 926.427487ms 926.737361ms 929.943032ms 930.754156ms 934.453582ms 935.234637ms 936.436127ms 936.99819ms 942.51054ms 943.024527ms 946.439643ms 949.748294ms 955.121946ms 956.94788ms 960.576859ms 970.090876ms 970.520023ms 973.18628ms 979.60216ms 982.578387ms 986.836575ms 987.071205ms 987.080809ms 987.102838ms 988.100281ms 989.385847ms 991.060551ms 995.344025ms 999.978147ms 1.000765634s 1.00356061s 1.010399756s 1.014167846s 1.015963598s 1.016360588s 1.016699567s 1.023089466s 1.023137806s 1.02575508s 1.02946733s 1.029938659s 1.029996314s 1.036493263s 1.039056565s 1.039363084s 1.040122744s 1.042050425s 1.050952827s 1.051466486s 1.052000711s 1.053506975s 1.055525022s 1.059430935s 1.059900669s 1.074662087s 1.079245877s 1.079769535s 1.08072817s 1.08297001s 1.086952972s 1.092988357s 1.093023328s 1.094067134s 1.096252161s 1.098479646s 1.098579513s 1.102542897s 1.111135526s 1.113541807s 1.115003815s 1.115211319s 1.115919886s 1.124420072s 1.124476883s 1.126869889s 1.135181867s 1.137601232s 1.147758654s 1.149065355s 1.155163434s 1.155570094s 1.155891475s 1.17567886s 1.183195703s 1.185932573s 1.18984732s 1.196273031s 1.204339151s 1.207344653s 1.210970554s 1.213828096s 1.21980998s 1.220468222s 1.221328497s 1.222294063s 1.22489974s 1.229904332s 1.233788767s 1.238239557s 1.242601647s 1.244834354s 1.246801897s 1.248163938s 1.248596059s 1.251927952s 1.268104834s 1.277329002s 1.294183387s 1.303208586s 1.310841504s 1.313166522s 1.319318274s 1.319591394s 1.337495397s 1.356260678s 1.363041162s 1.374608705s 1.376850624s 1.38271482s 1.395159638s 1.431319214s 1.449259777s 1.531763726s 1.542652648s 1.543796341s 1.553439212s 1.557720511s 1.573188426s 1.582841185s 1.602303291s 1.610364201s 1.618540751s 1.622382525s 1.628179804s 1.647035303s 1.666629682s 1.667401098s 1.687593561s] Jan 25 00:16:17.222: INFO: 50 %ile: 1.02575508s Jan 25 00:16:17.222: INFO: 90 %ile: 1.38271482s Jan 25 00:16:17.222: INFO: 99 %ile: 1.667401098s Jan 25 00:16:17.222: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:16:17.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7261" for this suite. 
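Note: the 50/90/99 %ile figures above can be recovered from the sorted list of 200 samples with a simple nearest-rank index (len*p/100 - 1 gives indexes 99, 179, and 197, matching the logged values). A minimal standalone Go sketch, not the framework's own helper:

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // percentile picks the p-th percentile from sorted samples by
    // nearest-rank indexing; rounding may differ from the framework.
    func percentile(sorted []time.Duration, p int) time.Duration {
        idx := len(sorted)*p/100 - 1
        if idx < 0 {
            idx = 0
        }
        return sorted[idx]
    }

    func main() {
        // four of the 200 samples above, as nanosecond counts
        samples := []time.Duration{1667401098, 76410586, 1382714820, 1025755080}
        sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
        fmt.Println("50 %ile:", percentile(samples, 50))
    }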
• [SLOW TEST:21.865 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":95,"skipped":1489,"failed":0} [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:16:17.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:16:25.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3876" for this suite. • [SLOW TEST:8.245 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1489,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:16:25.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-dfdd39e3-53c7-4456-9258-14d891178bd8 STEP: Creating a pod to test consume configMaps Jan 25 00:16:25.878: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae" in namespace "configmap-7549" to be "success or failure" Jan 25 00:16:25.929: INFO: Pod "pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae": Phase="Pending", Reason="", readiness=false. Elapsed: 51.167877ms Jan 25 00:16:28.055: INFO: Pod "pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.177337553s Jan 25 00:16:30.060: INFO: Pod "pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181644936s Jan 25 00:16:32.238: INFO: Pod "pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359769174s Jan 25 00:16:34.258: INFO: Pod "pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379697046s Jan 25 00:16:36.298: INFO: Pod "pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae": Phase="Pending", Reason="", readiness=false. Elapsed: 10.419807764s Jan 25 00:16:38.303: INFO: Pod "pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.42539178s STEP: Saw pod success Jan 25 00:16:38.303: INFO: Pod "pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae" satisfied condition "success or failure" Jan 25 00:16:38.307: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae container configmap-volume-test: STEP: delete the pod Jan 25 00:16:38.562: INFO: Waiting for pod pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae to disappear Jan 25 00:16:38.568: INFO: Pod pod-configmaps-7a7b4c39-5117-45be-a057-d30e306993ae no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:16:38.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7549" for this suite. • [SLOW TEST:13.133 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1492,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:16:38.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75 Jan 25 00:16:38.939: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the sample API server. 
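Note: the repeated deployment-status dumps below come from a readiness wait loop; a sketch in that spirit (assumed helper name; Get taking a context is the newer client-go signature):

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForDeploymentReady polls until every replica is updated and
    // available, which is when the dumps below stop repeating.
    func waitForDeploymentReady(cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            ready := d.Status.ObservedGeneration >= d.Generation &&
                d.Status.UpdatedReplicas == *d.Spec.Replicas &&
                d.Status.AvailableReplicas == *d.Spec.Replicas
            return ready, nil
        })
    }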
Jan 25 00:16:40.046: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 25 00:16:42.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508199, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:16:44.765: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508199, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:16:46.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508199, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:16:48.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508199, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:16:50.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508200, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508199, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:16:54.061: INFO: Waited 864.406451ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:16:55.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-790" for this suite. 
• [SLOW TEST:16.558 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":98,"skipped":1515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:16:55.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 25 00:17:05.636: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:17:05.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9885" for this suite. 
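Note: the termination-message test above exercises a container that writes its message (the "DONE" seen in the log) to a non-default TerminationMessagePath while running as a non-root user. Roughly, with image, path, and UID as placeholders:

    import v1 "k8s.io/api/core/v1"

    func terminationMessageContainer() v1.Container {
        uid := int64(1000) // non-root, per the [LinuxOnly] non-root case
        return v1.Container{
            Name:  "termination-message-container",
            Image: "busybox", // placeholder image
            // write the message to the custom path, then exit 0
            Command:                []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
            TerminationMessagePath: "/dev/termination-custom-log",
            SecurityContext:        &v1.SecurityContext{RunAsUser: &uid},
        }
    }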
• [SLOW TEST:10.556 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1544,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:17:05.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-2752b118-9737-4459-b29e-01871d32682d in namespace container-probe-6536 Jan 25 00:17:14.228: INFO: Started pod liveness-2752b118-9737-4459-b29e-01871d32682d in namespace container-probe-6536 STEP: checking the pod's current state and verifying that restartCount is present Jan 25 00:17:14.231: INFO: Initial restart count of pod liveness-2752b118-9737-4459-b29e-01871d32682d is 0 Jan 25 00:17:34.317: INFO: Restart count of pod container-probe-6536/liveness-2752b118-9737-4459-b29e-01871d32682d is now 1 (20.086680028s elapsed) Jan 25 00:17:54.388: INFO: Restart count of pod container-probe-6536/liveness-2752b118-9737-4459-b29e-01871d32682d is now 2 (40.157512598s elapsed) Jan 25 00:18:14.543: INFO: Restart count of pod container-probe-6536/liveness-2752b118-9737-4459-b29e-01871d32682d is now 3 (1m0.311868431s elapsed) Jan 25 00:18:34.635: INFO: Restart count of pod container-probe-6536/liveness-2752b118-9737-4459-b29e-01871d32682d is now 4 (1m20.404495426s elapsed) Jan 25 00:19:42.922: INFO: Restart count of pod container-probe-6536/liveness-2752b118-9737-4459-b29e-01871d32682d is now 5 (2m28.691625552s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:19:43.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6536" for this suite. 
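Note: the monotonically increasing restart counts above are driven by a liveness probe that keeps failing, so the kubelet restarts the container on each failure. A sketch of such a probe, using 1.17-era field names (Handler was later renamed ProbeHandler):

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // restartingLivenessProbe: once /healthz starts failing, each probe
    // failure triggers a restart and restartCount only ever increases.
    func restartingLivenessProbe() *v1.Probe {
        return &v1.Probe{
            Handler: v1.Handler{
                HTTPGet: &v1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
            },
            InitialDelaySeconds: 15,
            FailureThreshold:    1,
        }
    }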
• [SLOW TEST:157.268 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1550,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:19:43.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 25 00:19:57.776: INFO: Successfully updated pod "adopt-release-stjmr" STEP: Checking that the Job readopts the Pod Jan 25 00:19:57.777: INFO: Waiting up to 15m0s for pod "adopt-release-stjmr" in namespace "job-6014" to be "adopted" Jan 25 00:19:57.788: INFO: Pod "adopt-release-stjmr": Phase="Running", Reason="", readiness=true. Elapsed: 11.741668ms Jan 25 00:19:59.802: INFO: Pod "adopt-release-stjmr": Phase="Running", Reason="", readiness=true. Elapsed: 2.025013978s Jan 25 00:19:59.802: INFO: Pod "adopt-release-stjmr" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 25 00:20:00.320: INFO: Successfully updated pod "adopt-release-stjmr" STEP: Checking that the Job releases the Pod Jan 25 00:20:00.320: INFO: Waiting up to 15m0s for pod "adopt-release-stjmr" in namespace "job-6014" to be "released" Jan 25 00:20:00.334: INFO: Pod "adopt-release-stjmr": Phase="Running", Reason="", readiness=true. Elapsed: 14.368593ms Jan 25 00:20:02.340: INFO: Pod "adopt-release-stjmr": Phase="Running", Reason="", readiness=true. Elapsed: 2.020467743s Jan 25 00:20:02.340: INFO: Pod "adopt-release-stjmr" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:20:02.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6014" for this suite. 
• [SLOW TEST:19.297 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":101,"skipped":1558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:20:02.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 25 00:20:02.969: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 25 00:20:04.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:20:06.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:20:08.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 25 00:20:11.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508403, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 25 00:20:14.036: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 00:20:14.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-287-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:20:15.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5446" for this suite. STEP: Destroying namespace "webhook-5446-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:13.304 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":102,"skipped":1589,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:20:15.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:20:22.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6268" for this suite. STEP: Destroying namespace "nsdeletetest-1761" for this suite. Jan 25 00:20:22.358: INFO: Namespace nsdeletetest-1761 was already deleted STEP: Destroying namespace "nsdeletetest-4474" for this suite. 
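Note: the final verification above reduces to listing Services in the recreated namespace and expecting none; a sketch, with cs (a kubernetes.Interface) assumed:

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // verifyNoServices asserts no Service survived namespace deletion.
    func verifyNoServices(cs kubernetes.Interface, ns string) error {
        svcs, err := cs.CoreV1().Services(ns).List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        if n := len(svcs.Items); n != 0 {
            return fmt.Errorf("expected 0 services in %q, found %d", ns, n)
        }
        return nil
    }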
• [SLOW TEST:6.703 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":103,"skipped":1640,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:20:22.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 00:20:22.444: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 25 00:20:25.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-650 create -f -' Jan 25 00:20:27.935: INFO: stderr: "" Jan 25 00:20:27.935: INFO: stdout: "e2e-test-crd-publish-openapi-4269-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 25 00:20:27.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-650 delete e2e-test-crd-publish-openapi-4269-crds test-cr' Jan 25 00:20:28.107: INFO: stderr: "" Jan 25 00:20:28.107: INFO: stdout: "e2e-test-crd-publish-openapi-4269-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 25 00:20:28.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-650 apply -f -' Jan 25 00:20:28.430: INFO: stderr: "" Jan 25 00:20:28.430: INFO: stdout: "e2e-test-crd-publish-openapi-4269-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 25 00:20:28.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-650 delete e2e-test-crd-publish-openapi-4269-crds test-cr' Jan 25 00:20:28.567: INFO: stderr: "" Jan 25 00:20:28.567: INFO: stdout: "e2e-test-crd-publish-openapi-4269-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 25 00:20:28.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4269-crds' Jan 25 00:20:28.798: INFO: stderr: "" Jan 25 00:20:28.798: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4269-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this 
representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:20:30.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-650" for this suite. • [SLOW TEST:8.605 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":104,"skipped":1643,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:20:30.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:20:31.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6077" for this suite. 
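Note: the QOS test above passes because the pod's requests equal its limits for both cpu and memory, which assigns it the Guaranteed QOS class. Illustrative values (quantities are placeholders):

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // guaranteedResources: requests == limits for cpu and memory in
    // every container yields pod.Status.QOSClass == v1.PodQOSGuaranteed.
    func guaranteedResources() v1.ResourceRequirements {
        qty := v1.ResourceList{
            v1.ResourceCPU:    resource.MustParse("100m"),
            v1.ResourceMemory: resource.MustParse("100Mi"),
        }
        return v1.ResourceRequirements{Requests: qty, Limits: qty}
    }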
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":105,"skipped":1648,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:20:31.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-865.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-865.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 25 00:20:49.361: INFO: DNS probes using dns-865/dns-test-3e558c17-0b5c-4252-becc-07caf658567c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:20:49.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-865" for this suite. 
• [SLOW TEST:18.315 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":106,"skipped":1670,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:20:49.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-6e1c58c1-90bf-4d3b-a897-444ec1bd72fc STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 25 00:21:01.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5951" for this suite. • [SLOW TEST:12.327 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1673,"failed":0} SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 25 00:21:01.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 25 00:21:02.096: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/:
alternatives.log apt/ ... (200; 15.707856ms)
Jan 25 00:21:02.106: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 9.579133ms)
Jan 25 00:21:02.115: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 8.985234ms)
Jan 25 00:21:02.120: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 4.917942ms)
Jan 25 00:21:02.127: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 6.955879ms)
Jan 25 00:21:02.133: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 5.944139ms)
Jan 25 00:21:02.139: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 5.995638ms)
Jan 25 00:21:02.145: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 5.981298ms)
Jan 25 00:21:02.151: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 6.4244ms)
Jan 25 00:21:02.157: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 5.72449ms)
Jan 25 00:21:02.168: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 10.957406ms)
Jan 25 00:21:02.193: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 24.657154ms)
Jan 25 00:21:02.200: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 7.613288ms)
Jan 25 00:21:02.206: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 5.671846ms)
Jan 25 00:21:02.211: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 4.811937ms)
Jan 25 00:21:02.215: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 4.308001ms)
Jan 25 00:21:02.223: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 7.406548ms)
Jan 25 00:21:02.231: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 7.681426ms)
Jan 25 00:21:02.235: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 4.240153ms)
Jan 25 00:21:02.239: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: alternatives.log apt/ ... (200; 3.619661ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:21:02.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-707" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":108,"skipped":1681,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:21:02.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-9rh9g in namespace proxy-8930
I0125 00:21:02.394282       9 runners.go:189] Created replication controller with name: proxy-service-9rh9g, namespace: proxy-8930, replica count: 1
I0125 00:21:03.445070       9 runners.go:189] proxy-service-9rh9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:21:04.445359       9 runners.go:189] proxy-service-9rh9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:21:05.445816       9 runners.go:189] proxy-service-9rh9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:21:06.446152       9 runners.go:189] proxy-service-9rh9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:21:07.446486       9 runners.go:189] proxy-service-9rh9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:21:08.446790       9 runners.go:189] proxy-service-9rh9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:21:09.447106       9 runners.go:189] proxy-service-9rh9g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0125 00:21:10.447513       9 runners.go:189] proxy-service-9rh9g Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 00:21:10.454: INFO: setup took 8.083468174s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
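Note: the 16 cases mix pod and service proxying, numbered and named ports, and plain/http:/https: scheme prefixes, as the attempt lines below show. A few of the URL shapes, printed for illustration only:

    package main

    import "fmt"

    func main() {
        base := "/api/v1/namespaces/proxy-8930"
        paths := []string{
            base + "/pods/proxy-service-9rh9g-hj8bz:1080/proxy/",
            base + "/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/",
            base + "/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/",
            base + "/services/proxy-service-9rh9g:portname1/proxy/",
            base + "/services/https:proxy-service-9rh9g:tlsportname2/proxy/",
            // ...the remaining variants follow the same pattern
        }
        for _, p := range paths {
            fmt.Println("GET", p) // each expects 200 and the echo server's body
        }
    }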
Jan 25 00:21:10.488: INFO: (0) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 32.702472ms)
Jan 25 00:21:10.488: INFO: (0) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 33.204064ms)
Jan 25 00:21:10.488: INFO: (0) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 33.188033ms)
Jan 25 00:21:10.488: INFO: (0) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 33.425654ms)
Jan 25 00:21:10.488: INFO: (0) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 33.224646ms)
Jan 25 00:21:10.489: INFO: (0) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 33.582606ms)
Jan 25 00:21:10.489: INFO: (0) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 33.241153ms)
Jan 25 00:21:10.489: INFO: (0) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 33.295208ms)
Jan 25 00:21:10.489: INFO: (0) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 33.513541ms)
Jan 25 00:21:10.489: INFO: (0) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 33.90691ms)
Jan 25 00:21:10.489: INFO: (0) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 33.572968ms)
Jan 25 00:21:10.494: INFO: (0) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 39.180996ms)
Jan 25 00:21:10.495: INFO: (0) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 39.279441ms)
Jan 25 00:21:10.495: INFO: (0) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 39.61143ms)
Jan 25 00:21:10.495: INFO: (0) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 40.111708ms)
Jan 25 00:21:10.495: INFO: (0) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: ... (200; 6.35803ms)
Jan 25 00:21:10.502: INFO: (1) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 6.769824ms)
Jan 25 00:21:10.502: INFO: (1) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 6.447001ms)
Jan 25 00:21:10.513: INFO: (1) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 17.514749ms)
Jan 25 00:21:10.513: INFO: (1) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 17.636899ms)
Jan 25 00:21:10.515: INFO: (1) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 19.554134ms)
Jan 25 00:21:10.518: INFO: (1) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 22.698999ms)
Jan 25 00:21:10.518: INFO: (1) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test<... (200; 24.161341ms)
Jan 25 00:21:10.520: INFO: (1) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 24.337817ms)
Jan 25 00:21:10.534: INFO: (2) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 13.261904ms)
Jan 25 00:21:10.536: INFO: (2) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 14.422422ms)
Jan 25 00:21:10.536: INFO: (2) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 14.504221ms)
Jan 25 00:21:10.536: INFO: (2) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 14.489839ms)
Jan 25 00:21:10.536: INFO: (2) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 15.267921ms)
Jan 25 00:21:10.536: INFO: (2) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 14.678631ms)
Jan 25 00:21:10.538: INFO: (2) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 16.642312ms)
Jan 25 00:21:10.538: INFO: (2) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 16.97364ms)
Jan 25 00:21:10.538: INFO: (2) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 16.665642ms)
Jan 25 00:21:10.538: INFO: (2) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 16.976831ms)
Jan 25 00:21:10.538: INFO: (2) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 16.739161ms)
Jan 25 00:21:10.538: INFO: (2) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: ... (200; 25.643423ms)
Jan 25 00:21:10.570: INFO: (3) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 25.137823ms)
Jan 25 00:21:10.570: INFO: (3) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 25.119708ms)
Jan 25 00:21:10.570: INFO: (3) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 24.474924ms)
Jan 25 00:21:10.570: INFO: (3) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test<... (200; 25.187763ms)
Jan 25 00:21:10.570: INFO: (3) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 25.260245ms)
Jan 25 00:21:10.570: INFO: (3) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 25.356618ms)
Jan 25 00:21:10.571: INFO: (3) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 26.036906ms)
Jan 25 00:21:10.571: INFO: (3) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 26.113786ms)
Jan 25 00:21:10.571: INFO: (3) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 26.764539ms)
Jan 25 00:21:10.571: INFO: (3) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 26.446307ms)
Jan 25 00:21:10.581: INFO: (4) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 8.762753ms)
Jan 25 00:21:10.581: INFO: (4) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 9.111762ms)
Jan 25 00:21:10.583: INFO: (4) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 11.476412ms)
Jan 25 00:21:10.584: INFO: (4) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 10.04619ms)
Jan 25 00:21:10.584: INFO: (4) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 12.526346ms)
Jan 25 00:21:10.584: INFO: (4) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 12.49159ms)
Jan 25 00:21:10.585: INFO: (4) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 12.574324ms)
Jan 25 00:21:10.585: INFO: (4) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 12.898521ms)
Jan 25 00:21:10.585: INFO: (4) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 13.199543ms)
Jan 25 00:21:10.585: INFO: (4) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 13.86476ms)
Jan 25 00:21:10.585: INFO: (4) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 14.040463ms)
Jan 25 00:21:10.586: INFO: (4) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 13.543116ms)
Jan 25 00:21:10.586: INFO: (4) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 13.858229ms)
Jan 25 00:21:10.586: INFO: (4) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: ... (200; 23.69864ms)
Jan 25 00:21:10.616: INFO: (5) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 24.233316ms)
Jan 25 00:21:10.616: INFO: (5) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 24.573923ms)
Jan 25 00:21:10.616: INFO: (5) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 24.29191ms)
Jan 25 00:21:10.616: INFO: (5) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 24.336763ms)
Jan 25 00:21:10.616: INFO: (5) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 24.337278ms)
Jan 25 00:21:10.617: INFO: (5) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 24.330912ms)
Jan 25 00:21:10.617: INFO: (5) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test (200; 8.995526ms)
Jan 25 00:21:10.628: INFO: (6) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 9.676312ms)
Jan 25 00:21:10.644: INFO: (6) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 23.917849ms)
Jan 25 00:21:10.644: INFO: (6) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 22.541224ms)
Jan 25 00:21:10.644: INFO: (6) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 24.319077ms)
Jan 25 00:21:10.644: INFO: (6) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 22.711178ms)
Jan 25 00:21:10.644: INFO: (6) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 23.224187ms)
Jan 25 00:21:10.644: INFO: (6) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 22.623714ms)
Jan 25 00:21:10.644: INFO: (6) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test<... (200; 24.739014ms)
Jan 25 00:21:10.644: INFO: (6) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 23.664035ms)
Jan 25 00:21:10.669: INFO: (6) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 48.184114ms)
Jan 25 00:21:10.669: INFO: (6) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 48.635503ms)
Jan 25 00:21:10.669: INFO: (6) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 49.431054ms)
Jan 25 00:21:10.669: INFO: (6) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 48.130715ms)
Jan 25 00:21:10.684: INFO: (7) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 14.164531ms)
Jan 25 00:21:10.684: INFO: (7) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 14.151025ms)
Jan 25 00:21:10.684: INFO: (7) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 14.426375ms)
Jan 25 00:21:10.684: INFO: (7) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 14.252798ms)
Jan 25 00:21:10.684: INFO: (7) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 14.642315ms)
Jan 25 00:21:10.684: INFO: (7) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 14.260967ms)
Jan 25 00:21:10.685: INFO: (7) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 15.840623ms)
Jan 25 00:21:10.688: INFO: (7) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 18.469631ms)
Jan 25 00:21:10.688: INFO: (7) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 18.917679ms)
Jan 25 00:21:10.688: INFO: (7) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test<... (200; 8.983345ms)
Jan 25 00:21:10.701: INFO: (8) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 12.001967ms)
Jan 25 00:21:10.702: INFO: (8) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 12.640093ms)
Jan 25 00:21:10.703: INFO: (8) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 13.559344ms)
Jan 25 00:21:10.703: INFO: (8) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 14.004138ms)
Jan 25 00:21:10.704: INFO: (8) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 14.464533ms)
Jan 25 00:21:10.704: INFO: (8) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 14.603215ms)
Jan 25 00:21:10.704: INFO: (8) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 14.773612ms)
Jan 25 00:21:10.704: INFO: (8) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: ... (200; 10.539098ms)
Jan 25 00:21:10.723: INFO: (9) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 11.981324ms)
Jan 25 00:21:10.724: INFO: (9) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 12.501972ms)
Jan 25 00:21:10.725: INFO: (9) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 13.832747ms)
Jan 25 00:21:10.725: INFO: (9) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 13.772691ms)
Jan 25 00:21:10.725: INFO: (9) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test (200; 20.26098ms)
Jan 25 00:21:10.740: INFO: (10) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test (200; 11.308471ms)
Jan 25 00:21:10.743: INFO: (10) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 11.600742ms)
Jan 25 00:21:10.745: INFO: (10) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 13.013941ms)
Jan 25 00:21:10.745: INFO: (10) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 13.279708ms)
Jan 25 00:21:10.745: INFO: (10) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 13.098971ms)
Jan 25 00:21:10.745: INFO: (10) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 13.341151ms)
Jan 25 00:21:10.745: INFO: (10) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 13.442913ms)
Jan 25 00:21:10.745: INFO: (10) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 13.4619ms)
Jan 25 00:21:10.745: INFO: (10) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 13.564012ms)
Jan 25 00:21:10.747: INFO: (10) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 15.357289ms)
Jan 25 00:21:10.747: INFO: (10) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 15.568214ms)
Jan 25 00:21:10.747: INFO: (10) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 15.573904ms)
Jan 25 00:21:10.747: INFO: (10) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 15.644171ms)
Jan 25 00:21:10.753: INFO: (11) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 5.63168ms)
Jan 25 00:21:10.754: INFO: (11) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test<... (200; 11.218438ms)
Jan 25 00:21:10.761: INFO: (11) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 12.480243ms)
Jan 25 00:21:10.761: INFO: (11) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 12.801098ms)
Jan 25 00:21:10.761: INFO: (11) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 13.562826ms)
Jan 25 00:21:10.761: INFO: (11) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 13.197509ms)
Jan 25 00:21:10.761: INFO: (11) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 13.263423ms)
Jan 25 00:21:10.761: INFO: (11) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 13.612675ms)
Jan 25 00:21:10.761: INFO: (11) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 13.302652ms)
Jan 25 00:21:10.767: INFO: (12) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 4.904219ms)
Jan 25 00:21:10.767: INFO: (12) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 4.875811ms)
Jan 25 00:21:10.767: INFO: (12) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 5.346981ms)
Jan 25 00:21:10.768: INFO: (12) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 5.942633ms)
Jan 25 00:21:10.768: INFO: (12) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test (200; 29.881689ms)
Jan 25 00:21:10.795: INFO: (12) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 33.34667ms)
Jan 25 00:21:10.795: INFO: (12) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 33.404906ms)
Jan 25 00:21:10.795: INFO: (12) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 33.310789ms)
Jan 25 00:21:10.795: INFO: (12) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 33.507837ms)
Jan 25 00:21:10.795: INFO: (12) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 32.558852ms)
Jan 25 00:21:10.796: INFO: (12) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 34.15574ms)
Jan 25 00:21:10.804: INFO: (13) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 8.183201ms)
Jan 25 00:21:10.804: INFO: (13) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 8.317723ms)
Jan 25 00:21:10.806: INFO: (13) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 9.721027ms)
Jan 25 00:21:10.808: INFO: (13) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 11.725849ms)
Jan 25 00:21:10.810: INFO: (13) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 13.500656ms)
Jan 25 00:21:10.810: INFO: (13) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 13.513985ms)
Jan 25 00:21:10.810: INFO: (13) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 13.674334ms)
Jan 25 00:21:10.810: INFO: (13) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 13.586543ms)
Jan 25 00:21:10.810: INFO: (13) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: ... (200; 13.57289ms)
Jan 25 00:21:10.810: INFO: (13) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 13.621266ms)
Jan 25 00:21:10.810: INFO: (13) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 14.317972ms)
Jan 25 00:21:10.811: INFO: (13) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 14.705735ms)
Jan 25 00:21:10.811: INFO: (13) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 14.834819ms)
Jan 25 00:21:10.812: INFO: (13) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 16.229567ms)
Jan 25 00:21:10.825: INFO: (14) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 12.150458ms)
Jan 25 00:21:10.825: INFO: (14) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: ... (200; 14.921437ms)
Jan 25 00:21:10.828: INFO: (14) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 15.086726ms)
Jan 25 00:21:10.828: INFO: (14) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 15.431935ms)
Jan 25 00:21:10.828: INFO: (14) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 15.598882ms)
Jan 25 00:21:10.828: INFO: (14) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 15.686353ms)
Jan 25 00:21:10.828: INFO: (14) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 15.860622ms)
Jan 25 00:21:10.829: INFO: (14) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 16.087418ms)
Jan 25 00:21:10.829: INFO: (14) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 16.334899ms)
Jan 25 00:21:10.829: INFO: (14) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 16.852252ms)
Jan 25 00:21:10.830: INFO: (14) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 17.231878ms)
Jan 25 00:21:10.830: INFO: (14) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 17.285894ms)
Jan 25 00:21:10.832: INFO: (14) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 18.989383ms)
Jan 25 00:21:10.838: INFO: (15) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 5.951673ms)
Jan 25 00:21:10.840: INFO: (15) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 8.139629ms)
Jan 25 00:21:10.840: INFO: (15) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 8.622479ms)
Jan 25 00:21:10.840: INFO: (15) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 8.693739ms)
Jan 25 00:21:10.841: INFO: (15) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 8.795641ms)
Jan 25 00:21:10.841: INFO: (15) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 8.997363ms)
Jan 25 00:21:10.841: INFO: (15) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 9.780159ms)
Jan 25 00:21:10.851: INFO: (15) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 19.455551ms)
Jan 25 00:21:10.852: INFO: (15) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 20.137473ms)
Jan 25 00:21:10.852: INFO: (15) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 20.426589ms)
Jan 25 00:21:10.853: INFO: (15) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 20.880192ms)
Jan 25 00:21:10.853: INFO: (15) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 20.870204ms)
Jan 25 00:21:10.853: INFO: (15) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: ... (200; 12.655607ms)
Jan 25 00:21:10.868: INFO: (16) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 12.644064ms)
Jan 25 00:21:10.868: INFO: (16) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test (200; 15.112742ms)
Jan 25 00:21:10.871: INFO: (16) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 16.080159ms)
Jan 25 00:21:10.871: INFO: (16) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 16.245745ms)
Jan 25 00:21:10.871: INFO: (16) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 16.217061ms)
Jan 25 00:21:10.872: INFO: (16) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 16.230789ms)
Jan 25 00:21:10.872: INFO: (16) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 17.311484ms)
Jan 25 00:21:10.873: INFO: (16) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 17.122348ms)
Jan 25 00:21:10.873: INFO: (16) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 17.652028ms)
Jan 25 00:21:10.873: INFO: (16) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 17.196642ms)
Jan 25 00:21:10.873: INFO: (16) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 17.624107ms)
Jan 25 00:21:10.874: INFO: (16) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 18.677743ms)
Jan 25 00:21:10.883: INFO: (17) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 9.000748ms)
Jan 25 00:21:10.883: INFO: (17) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 9.2245ms)
Jan 25 00:21:10.883: INFO: (17) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 9.31384ms)
Jan 25 00:21:10.884: INFO: (17) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 9.89942ms)
Jan 25 00:21:10.899: INFO: (17) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 24.730898ms)
Jan 25 00:21:10.904: INFO: (17) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:1080/proxy/: test<... (200; 29.126158ms)
Jan 25 00:21:10.904: INFO: (17) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 29.188477ms)
Jan 25 00:21:10.904: INFO: (17) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 29.25922ms)
Jan 25 00:21:10.904: INFO: (17) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 29.281473ms)
Jan 25 00:21:10.904: INFO: (17) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 29.121324ms)
Jan 25 00:21:10.904: INFO: (17) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 29.281228ms)
Jan 25 00:21:10.904: INFO: (17) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 29.653157ms)
Jan 25 00:21:10.904: INFO: (17) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 29.354199ms)
Jan 25 00:21:10.906: INFO: (17) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 31.248175ms)
Jan 25 00:21:10.906: INFO: (17) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test (200; 8.206147ms)
Jan 25 00:21:10.918: INFO: (18) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 9.029158ms)
Jan 25 00:21:10.918: INFO: (18) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 8.579388ms)
Jan 25 00:21:10.918: INFO: (18) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 8.917234ms)
Jan 25 00:21:10.919: INFO: (18) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 8.981287ms)
Jan 25 00:21:10.919: INFO: (18) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 10.121789ms)
Jan 25 00:21:10.920: INFO: (18) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test<... (200; 10.422277ms)
Jan 25 00:21:10.920: INFO: (18) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 10.502609ms)
Jan 25 00:21:10.920: INFO: (18) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 10.839558ms)
Jan 25 00:21:10.921: INFO: (18) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 11.215357ms)
Jan 25 00:21:10.921: INFO: (18) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 11.572544ms)
Jan 25 00:21:10.923: INFO: (18) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 13.034605ms)
Jan 25 00:21:10.923: INFO: (18) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 13.085487ms)
Jan 25 00:21:10.928: INFO: (19) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz/proxy/: test (200; 4.801287ms)
Jan 25 00:21:10.934: INFO: (19) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname1/proxy/: foo (200; 11.404954ms)
Jan 25 00:21:10.935: INFO: (19) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname2/proxy/: bar (200; 11.942524ms)
Jan 25 00:21:10.936: INFO: (19) /api/v1/namespaces/proxy-8930/services/proxy-service-9rh9g:portname2/proxy/: bar (200; 12.701362ms)
Jan 25 00:21:10.936: INFO: (19) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 12.892223ms)
Jan 25 00:21:10.936: INFO: (19) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:460/proxy/: tls baz (200; 13.015607ms)
Jan 25 00:21:10.937: INFO: (19) /api/v1/namespaces/proxy-8930/services/http:proxy-service-9rh9g:portname1/proxy/: foo (200; 14.346048ms)
Jan 25 00:21:10.937: INFO: (19) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 14.336208ms)
Jan 25 00:21:10.938: INFO: (19) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname1/proxy/: tls baz (200; 14.545218ms)
Jan 25 00:21:10.938: INFO: (19) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:160/proxy/: foo (200; 14.734674ms)
Jan 25 00:21:10.938: INFO: (19) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:443/proxy/: test<... (200; 15.25656ms)
Jan 25 00:21:10.939: INFO: (19) /api/v1/namespaces/proxy-8930/services/https:proxy-service-9rh9g:tlsportname2/proxy/: tls qux (200; 15.502546ms)
Jan 25 00:21:10.939: INFO: (19) /api/v1/namespaces/proxy-8930/pods/proxy-service-9rh9g-hj8bz:162/proxy/: bar (200; 15.800323ms)
Jan 25 00:21:10.939: INFO: (19) /api/v1/namespaces/proxy-8930/pods/http:proxy-service-9rh9g-hj8bz:1080/proxy/: ... (200; 15.858097ms)
Jan 25 00:21:10.939: INFO: (19) /api/v1/namespaces/proxy-8930/pods/https:proxy-service-9rh9g-hj8bz:462/proxy/: tls qux (200; 16.065092ms)
STEP: deleting ReplicationController proxy-service-9rh9g in namespace proxy-8930, will wait for the garbage collector to delete the pods
Jan 25 00:21:11.000: INFO: Deleting ReplicationController proxy-service-9rh9g took: 6.077691ms
Jan 25 00:21:11.300: INFO: Terminating ReplicationController proxy-service-9rh9g pods took: 300.459134ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:21:22.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8930" for this suite.

• [SLOW TEST:20.171 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":109,"skipped":1699,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:21:22.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-downwardapi-6hlp
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 00:21:22.646: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6hlp" in namespace "subpath-5825" to be "success or failure"
Jan 25 00:21:22.670: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Pending", Reason="", readiness=false. Elapsed: 24.175065ms
Jan 25 00:21:24.675: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028779354s
Jan 25 00:21:26.703: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056711018s
Jan 25 00:21:28.708: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062133691s
Jan 25 00:21:30.735: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Running", Reason="", readiness=true. Elapsed: 8.088988048s
Jan 25 00:21:32.850: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Running", Reason="", readiness=true. Elapsed: 10.204026479s
Jan 25 00:21:34.857: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Running", Reason="", readiness=true. Elapsed: 12.211244714s
Jan 25 00:21:36.865: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Running", Reason="", readiness=true. Elapsed: 14.218781001s
Jan 25 00:21:38.874: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Running", Reason="", readiness=true. Elapsed: 16.227421291s
Jan 25 00:21:40.879: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Running", Reason="", readiness=true. Elapsed: 18.23311256s
Jan 25 00:21:42.886: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Running", Reason="", readiness=true. Elapsed: 20.240052933s
Jan 25 00:21:44.893: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Running", Reason="", readiness=true. Elapsed: 22.247074116s
Jan 25 00:21:46.904: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Running", Reason="", readiness=true. Elapsed: 24.257399514s
Jan 25 00:21:48.912: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Running", Reason="", readiness=true. Elapsed: 26.265866621s
Jan 25 00:21:50.920: INFO: Pod "pod-subpath-test-downwardapi-6hlp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.273415085s
STEP: Saw pod success
Jan 25 00:21:50.920: INFO: Pod "pod-subpath-test-downwardapi-6hlp" satisfied condition "success or failure"
Jan 25 00:21:50.922: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-6hlp container test-container-subpath-downwardapi-6hlp: 
STEP: delete the pod
Jan 25 00:21:51.102: INFO: Waiting for pod pod-subpath-test-downwardapi-6hlp to disappear
Jan 25 00:21:51.109: INFO: Pod pod-subpath-test-downwardapi-6hlp no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-6hlp
Jan 25 00:21:51.109: INFO: Deleting pod "pod-subpath-test-downwardapi-6hlp" in namespace "subpath-5825"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:21:51.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5825" for this suite.

• [SLOW TEST:28.702 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":110,"skipped":1701,"failed":0}
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:21:51.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-f24148a9-1c28-49b4-b8d2-ab79e237975e
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:21:51.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-775" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":111,"skipped":1701,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:21:51.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's args
Jan 25 00:21:51.445: INFO: Waiting up to 5m0s for pod "var-expansion-52971ef0-e706-463b-9bc8-05de6b325fe1" in namespace "var-expansion-463" to be "success or failure"
Jan 25 00:21:51.458: INFO: Pod "var-expansion-52971ef0-e706-463b-9bc8-05de6b325fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.385428ms
Jan 25 00:21:53.463: INFO: Pod "var-expansion-52971ef0-e706-463b-9bc8-05de6b325fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017969344s
Jan 25 00:21:55.469: INFO: Pod "var-expansion-52971ef0-e706-463b-9bc8-05de6b325fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023090548s
Jan 25 00:21:57.475: INFO: Pod "var-expansion-52971ef0-e706-463b-9bc8-05de6b325fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02924514s
Jan 25 00:21:59.480: INFO: Pod "var-expansion-52971ef0-e706-463b-9bc8-05de6b325fe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034969032s
STEP: Saw pod success
Jan 25 00:21:59.481: INFO: Pod "var-expansion-52971ef0-e706-463b-9bc8-05de6b325fe1" satisfied condition "success or failure"
Jan 25 00:21:59.484: INFO: Trying to get logs from node jerma-node pod var-expansion-52971ef0-e706-463b-9bc8-05de6b325fe1 container dapi-container: 
STEP: delete the pod
Jan 25 00:21:59.578: INFO: Waiting for pod var-expansion-52971ef0-e706-463b-9bc8-05de6b325fe1 to disappear
Jan 25 00:21:59.588: INFO: Pod var-expansion-52971ef0-e706-463b-9bc8-05de6b325fe1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:21:59.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-463" for this suite.

• [SLOW TEST:8.313 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1714,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:21:59.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-e90f02e5-2195-42a3-a5f4-2b8ce7c51c92
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-e90f02e5-2195-42a3-a5f4-2b8ce7c51c92
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:22:07.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1185" for this suite.

• [SLOW TEST:8.210 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1745,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:22:07.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:22:24.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-338" for this suite.

• [SLOW TEST:16.461 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":114,"skipped":1795,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:22:24.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-cae940c2-7682-4d8c-a05b-db59b2d6dc00
STEP: Creating a pod to test consume configMaps
Jan 25 00:22:24.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-06d51323-edf5-4f23-b050-caced09c16c5" in namespace "configmap-5717" to be "success or failure"
Jan 25 00:22:24.397: INFO: Pod "pod-configmaps-06d51323-edf5-4f23-b050-caced09c16c5": Phase="Pending", Reason="", readiness=false. Elapsed: 49.259632ms
Jan 25 00:22:26.406: INFO: Pod "pod-configmaps-06d51323-edf5-4f23-b050-caced09c16c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058222936s
Jan 25 00:22:28.418: INFO: Pod "pod-configmaps-06d51323-edf5-4f23-b050-caced09c16c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069926953s
Jan 25 00:22:30.425: INFO: Pod "pod-configmaps-06d51323-edf5-4f23-b050-caced09c16c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077229946s
Jan 25 00:22:32.429: INFO: Pod "pod-configmaps-06d51323-edf5-4f23-b050-caced09c16c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08174896s
STEP: Saw pod success
Jan 25 00:22:32.429: INFO: Pod "pod-configmaps-06d51323-edf5-4f23-b050-caced09c16c5" satisfied condition "success or failure"
Jan 25 00:22:32.433: INFO: Trying to get logs from node jerma-node pod pod-configmaps-06d51323-edf5-4f23-b050-caced09c16c5 container configmap-volume-test: 
STEP: delete the pod
Jan 25 00:22:32.480: INFO: Waiting for pod pod-configmaps-06d51323-edf5-4f23-b050-caced09c16c5 to disappear
Jan 25 00:22:32.520: INFO: Pod pod-configmaps-06d51323-edf5-4f23-b050-caced09c16c5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:22:32.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5717" for this suite.

• [SLOW TEST:8.257 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1861,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:22:32.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 00:22:32.889: INFO: Waiting up to 5m0s for pod "downwardapi-volume-456c6761-8672-4b00-bf09-c6988b3703bf" in namespace "projected-2300" to be "success or failure"
Jan 25 00:22:32.935: INFO: Pod "downwardapi-volume-456c6761-8672-4b00-bf09-c6988b3703bf": Phase="Pending", Reason="", readiness=false. Elapsed: 45.84414ms
Jan 25 00:22:34.950: INFO: Pod "downwardapi-volume-456c6761-8672-4b00-bf09-c6988b3703bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061264874s
Jan 25 00:22:36.960: INFO: Pod "downwardapi-volume-456c6761-8672-4b00-bf09-c6988b3703bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071297952s
Jan 25 00:22:39.005: INFO: Pod "downwardapi-volume-456c6761-8672-4b00-bf09-c6988b3703bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116433439s
Jan 25 00:22:41.014: INFO: Pod "downwardapi-volume-456c6761-8672-4b00-bf09-c6988b3703bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125534s
STEP: Saw pod success
Jan 25 00:22:41.014: INFO: Pod "downwardapi-volume-456c6761-8672-4b00-bf09-c6988b3703bf" satisfied condition "success or failure"
Jan 25 00:22:41.019: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-456c6761-8672-4b00-bf09-c6988b3703bf container client-container: 
STEP: delete the pod
Jan 25 00:22:41.063: INFO: Waiting for pod downwardapi-volume-456c6761-8672-4b00-bf09-c6988b3703bf to disappear
Jan 25 00:22:41.076: INFO: Pod downwardapi-volume-456c6761-8672-4b00-bf09-c6988b3703bf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:22:41.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2300" for this suite.

• [SLOW TEST:8.552 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1863,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:22:41.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 25 00:22:47.687: INFO: 0 pods remaining
Jan 25 00:22:47.687: INFO: 0 pods has nil DeletionTimestamp
Jan 25 00:22:47.687: INFO: 
STEP: Gathering metrics
W0125 00:22:48.628852       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 00:22:48.629: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:22:48.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1574" for this suite.

• [SLOW TEST:7.555 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":117,"skipped":1871,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:22:48.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 25 00:22:49.453: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 25 00:22:55.027: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:22:55.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1947" for this suite.

• [SLOW TEST:8.456 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":118,"skipped":1903,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:22:57.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0125 00:23:39.931195       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 00:23:39.931: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:23:39.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4182" for this suite.

• [SLOW TEST:42.847 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":119,"skipped":1910,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:23:39.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 00:23:40.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f" in namespace "downward-api-7789" to be "success or failure"
Jan 25 00:23:40.080: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.522386ms
Jan 25 00:23:42.089: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022826922s
Jan 25 00:23:44.095: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028819729s
Jan 25 00:23:47.594: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.527920202s
Jan 25 00:23:51.324: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.258185479s
Jan 25 00:23:53.329: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.26299853s
Jan 25 00:23:56.326: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.260269177s
Jan 25 00:23:58.653: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.587429619s
Jan 25 00:24:00.825: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.759465508s
Jan 25 00:24:02.830: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.764097368s
STEP: Saw pod success
Jan 25 00:24:02.830: INFO: Pod "downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f" satisfied condition "success or failure"
Jan 25 00:24:02.833: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f container client-container: 
STEP: delete the pod
Jan 25 00:24:02.908: INFO: Waiting for pod downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f to disappear
Jan 25 00:24:02.925: INFO: Pod downwardapi-volume-145f6844-0856-4acf-8321-483db4531a6f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:24:02.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7789" for this suite.

• [SLOW TEST:22.997 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1913,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:24:02.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0125 00:24:18.818938       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 00:24:18.819: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:24:18.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8370" for this suite.

• [SLOW TEST:20.172 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":121,"skipped":1930,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:24:23.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 25 00:24:24.397: INFO: Waiting up to 5m0s for pod "downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c" in namespace "downward-api-7978" to be "success or failure"
Jan 25 00:24:24.408: INFO: Pod "downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.26276ms
Jan 25 00:24:26.417: INFO: Pod "downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019804811s
Jan 25 00:24:28.436: INFO: Pod "downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039561461s
Jan 25 00:24:30.472: INFO: Pod "downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074769354s
Jan 25 00:24:32.482: INFO: Pod "downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085318154s
Jan 25 00:24:35.341: INFO: Pod "downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.944195444s
Jan 25 00:24:37.411: INFO: Pod "downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.014499284s
Jan 25 00:24:39.419: INFO: Pod "downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.021903655s
STEP: Saw pod success
Jan 25 00:24:39.419: INFO: Pod "downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c" satisfied condition "success or failure"
Jan 25 00:24:39.423: INFO: Trying to get logs from node jerma-node pod downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c container dapi-container: 
STEP: delete the pod
Jan 25 00:24:39.776: INFO: Waiting for pod downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c to disappear
Jan 25 00:24:39.812: INFO: Pod downward-api-205bd9f2-bcdc-4040-b836-f9f1dbe0065c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:24:39.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7978" for this suite.

• [SLOW TEST:16.716 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1957,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:24:39.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:24:46.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8843" for this suite.

• [SLOW TEST:6.221 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1978,"failed":0}
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:24:46.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:25:46.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4411" for this suite.

• [SLOW TEST:60.281 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1978,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:25:46.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:25:46.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5347" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":125,"skipped":1986,"failed":0}

------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:25:46.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service nodeport-test with type=NodePort in namespace services-1256
STEP: creating replication controller nodeport-test in namespace services-1256
I0125 00:25:47.040337       9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1256, replica count: 2
I0125 00:25:50.091205       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:25:53.091886       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:25:56.092316       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:25:59.092806       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 25 00:25:59.092: INFO: Creating new exec pod
Jan 25 00:26:06.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1256 execpodl9q8x -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 25 00:26:06.796: INFO: stderr: "I0125 00:26:06.454652    1360 log.go:172] (0xc000a66000) (0xc000b22140) Create stream\nI0125 00:26:06.454983    1360 log.go:172] (0xc000a66000) (0xc000b22140) Stream added, broadcasting: 1\nI0125 00:26:06.488968    1360 log.go:172] (0xc000a66000) Reply frame received for 1\nI0125 00:26:06.489300    1360 log.go:172] (0xc000a66000) (0xc000b221e0) Create stream\nI0125 00:26:06.489332    1360 log.go:172] (0xc000a66000) (0xc000b221e0) Stream added, broadcasting: 3\nI0125 00:26:06.497271    1360 log.go:172] (0xc000a66000) Reply frame received for 3\nI0125 00:26:06.497405    1360 log.go:172] (0xc000a66000) (0xc000b22320) Create stream\nI0125 00:26:06.497424    1360 log.go:172] (0xc000a66000) (0xc000b22320) Stream added, broadcasting: 5\nI0125 00:26:06.503598    1360 log.go:172] (0xc000a66000) Reply frame received for 5\nI0125 00:26:06.646137    1360 log.go:172] (0xc000a66000) Data frame received for 5\nI0125 00:26:06.646657    1360 log.go:172] (0xc000b22320) (5) Data frame handling\nI0125 00:26:06.646758    1360 log.go:172] (0xc000b22320) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0125 00:26:06.660087    1360 log.go:172] (0xc000a66000) Data frame received for 5\nI0125 00:26:06.660309    1360 log.go:172] (0xc000b22320) (5) Data frame handling\nI0125 00:26:06.660380    1360 log.go:172] (0xc000b22320) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0125 00:26:06.768560    1360 log.go:172] (0xc000a66000) Data frame received for 1\nI0125 00:26:06.769081    1360 log.go:172] (0xc000a66000) (0xc000b22320) Stream removed, broadcasting: 5\nI0125 00:26:06.769210    1360 log.go:172] (0xc000b22140) (1) Data frame handling\nI0125 00:26:06.769319    1360 log.go:172] (0xc000b22140) (1) Data frame sent\nI0125 00:26:06.769551    1360 log.go:172] (0xc000a66000) (0xc000b221e0) Stream removed, broadcasting: 3\nI0125 00:26:06.769674    1360 log.go:172] (0xc000a66000) (0xc000b22140) Stream removed, broadcasting: 1\nI0125 00:26:06.769752    1360 log.go:172] (0xc000a66000) Go away received\nI0125 00:26:06.772487    1360 log.go:172] (0xc000a66000) (0xc000b22140) Stream removed, broadcasting: 1\nI0125 00:26:06.772518    1360 log.go:172] (0xc000a66000) (0xc000b221e0) Stream removed, broadcasting: 3\nI0125 00:26:06.772534    1360 log.go:172] (0xc000a66000) (0xc000b22320) Stream removed, broadcasting: 5\n"
Jan 25 00:26:06.796: INFO: stdout: ""
Jan 25 00:26:06.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1256 execpodl9q8x -- /bin/sh -x -c nc -zv -t -w 2 10.96.7.65 80'
Jan 25 00:26:07.213: INFO: stderr: "I0125 00:26:06.981184    1379 log.go:172] (0xc000a60210) (0xc000aee280) Create stream\nI0125 00:26:06.981341    1379 log.go:172] (0xc000a60210) (0xc000aee280) Stream added, broadcasting: 1\nI0125 00:26:06.988509    1379 log.go:172] (0xc000a60210) Reply frame received for 1\nI0125 00:26:06.988552    1379 log.go:172] (0xc000a60210) (0xc000aee320) Create stream\nI0125 00:26:06.988561    1379 log.go:172] (0xc000a60210) (0xc000aee320) Stream added, broadcasting: 3\nI0125 00:26:06.989467    1379 log.go:172] (0xc000a60210) Reply frame received for 3\nI0125 00:26:06.989488    1379 log.go:172] (0xc000a60210) (0xc00086c000) Create stream\nI0125 00:26:06.989496    1379 log.go:172] (0xc000a60210) (0xc00086c000) Stream added, broadcasting: 5\nI0125 00:26:06.990590    1379 log.go:172] (0xc000a60210) Reply frame received for 5\nI0125 00:26:07.090777    1379 log.go:172] (0xc000a60210) Data frame received for 5\nI0125 00:26:07.090922    1379 log.go:172] (0xc00086c000) (5) Data frame handling\nI0125 00:26:07.090959    1379 log.go:172] (0xc00086c000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.7.65 80\nI0125 00:26:07.099942    1379 log.go:172] (0xc000a60210) Data frame received for 5\nI0125 00:26:07.100037    1379 log.go:172] (0xc00086c000) (5) Data frame handling\nI0125 00:26:07.100062    1379 log.go:172] (0xc00086c000) (5) Data frame sent\nConnection to 10.96.7.65 80 port [tcp/http] succeeded!\nI0125 00:26:07.198428    1379 log.go:172] (0xc000a60210) (0xc000aee320) Stream removed, broadcasting: 3\nI0125 00:26:07.198985    1379 log.go:172] (0xc000a60210) Data frame received for 1\nI0125 00:26:07.199034    1379 log.go:172] (0xc000aee280) (1) Data frame handling\nI0125 00:26:07.199143    1379 log.go:172] (0xc000aee280) (1) Data frame sent\nI0125 00:26:07.199228    1379 log.go:172] (0xc000a60210) (0xc00086c000) Stream removed, broadcasting: 5\nI0125 00:26:07.199339    1379 log.go:172] (0xc000a60210) (0xc000aee280) Stream removed, broadcasting: 1\nI0125 00:26:07.199363    1379 log.go:172] (0xc000a60210) Go away received\nI0125 00:26:07.201098    1379 log.go:172] (0xc000a60210) (0xc000aee280) Stream removed, broadcasting: 1\nI0125 00:26:07.201128    1379 log.go:172] (0xc000a60210) (0xc000aee320) Stream removed, broadcasting: 3\nI0125 00:26:07.201195    1379 log.go:172] (0xc000a60210) (0xc00086c000) Stream removed, broadcasting: 5\n"
Jan 25 00:26:07.213: INFO: stdout: ""
Jan 25 00:26:07.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1256 execpodl9q8x -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30647'
Jan 25 00:26:07.519: INFO: stderr: "I0125 00:26:07.363289    1397 log.go:172] (0xc000718790) (0xc0005aa8c0) Create stream\nI0125 00:26:07.363492    1397 log.go:172] (0xc000718790) (0xc0005aa8c0) Stream added, broadcasting: 1\nI0125 00:26:07.367126    1397 log.go:172] (0xc000718790) Reply frame received for 1\nI0125 00:26:07.367160    1397 log.go:172] (0xc000718790) (0xc00046f540) Create stream\nI0125 00:26:07.367167    1397 log.go:172] (0xc000718790) (0xc00046f540) Stream added, broadcasting: 3\nI0125 00:26:07.368316    1397 log.go:172] (0xc000718790) Reply frame received for 3\nI0125 00:26:07.368331    1397 log.go:172] (0xc000718790) (0xc0006ea000) Create stream\nI0125 00:26:07.368339    1397 log.go:172] (0xc000718790) (0xc0006ea000) Stream added, broadcasting: 5\nI0125 00:26:07.369235    1397 log.go:172] (0xc000718790) Reply frame received for 5\nI0125 00:26:07.453439    1397 log.go:172] (0xc000718790) Data frame received for 5\nI0125 00:26:07.453528    1397 log.go:172] (0xc0006ea000) (5) Data frame handling\nI0125 00:26:07.453567    1397 log.go:172] (0xc0006ea000) (5) Data frame sent\nI0125 00:26:07.453609    1397 log.go:172] (0xc000718790) Data frame received for 5\nI0125 00:26:07.453637    1397 log.go:172] (0xc0006ea000) (5) Data frame handling\n+ nc -zv -tI0125 00:26:07.453837    1397 log.go:172] (0xc0006ea000) (5) Data frame sent\nI0125 00:26:07.453855    1397 log.go:172] (0xc000718790) Data frame received for 5\nI0125 00:26:07.453861    1397 log.go:172] (0xc0006ea000) (5) Data frame handling\nI0125 00:26:07.453884    1397 log.go:172] (0xc0006ea000) (5) Data frame sent\nI0125 00:26:07.453892    1397 log.go:172] (0xc000718790) Data frame received for 5\nI0125 00:26:07.453898    1397 log.go:172] (0xc0006ea000) (5) Data frame handling\n -w 2 10.96.2.250 30647\nI0125 00:26:07.453910    1397 log.go:172] (0xc0006ea000) (5) Data frame sent\nI0125 00:26:07.454056    1397 log.go:172] (0xc000718790) Data frame received for 5\nI0125 00:26:07.454097    1397 log.go:172] (0xc0006ea000) (5) Data frame handling\nI0125 00:26:07.454127    1397 log.go:172] (0xc0006ea000) (5) Data frame sent\nConnection to 10.96.2.250 30647 port [tcp/30647] succeeded!\nI0125 00:26:07.512053    1397 log.go:172] (0xc000718790) Data frame received for 1\nI0125 00:26:07.512217    1397 log.go:172] (0xc0005aa8c0) (1) Data frame handling\nI0125 00:26:07.512256    1397 log.go:172] (0xc0005aa8c0) (1) Data frame sent\nI0125 00:26:07.512647    1397 log.go:172] (0xc000718790) (0xc00046f540) Stream removed, broadcasting: 3\nI0125 00:26:07.512732    1397 log.go:172] (0xc000718790) (0xc0005aa8c0) Stream removed, broadcasting: 1\nI0125 00:26:07.513947    1397 log.go:172] (0xc000718790) (0xc0006ea000) Stream removed, broadcasting: 5\nI0125 00:26:07.514068    1397 log.go:172] (0xc000718790) (0xc0005aa8c0) Stream removed, broadcasting: 1\nI0125 00:26:07.514074    1397 log.go:172] (0xc000718790) (0xc00046f540) Stream removed, broadcasting: 3\nI0125 00:26:07.514078    1397 log.go:172] (0xc000718790) (0xc0006ea000) Stream removed, broadcasting: 5\nI0125 00:26:07.514261    1397 log.go:172] (0xc000718790) Go away received\n"
Jan 25 00:26:07.519: INFO: stdout: ""
Jan 25 00:26:07.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1256 execpodl9q8x -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30647'
Jan 25 00:26:07.859: INFO: stderr: "I0125 00:26:07.686785    1415 log.go:172] (0xc000be0c60) (0xc000b661e0) Create stream\nI0125 00:26:07.687099    1415 log.go:172] (0xc000be0c60) (0xc000b661e0) Stream added, broadcasting: 1\nI0125 00:26:07.690807    1415 log.go:172] (0xc000be0c60) Reply frame received for 1\nI0125 00:26:07.690855    1415 log.go:172] (0xc000be0c60) (0xc000bd80a0) Create stream\nI0125 00:26:07.690863    1415 log.go:172] (0xc000be0c60) (0xc000bd80a0) Stream added, broadcasting: 3\nI0125 00:26:07.691752    1415 log.go:172] (0xc000be0c60) Reply frame received for 3\nI0125 00:26:07.691780    1415 log.go:172] (0xc000be0c60) (0xc0009b6500) Create stream\nI0125 00:26:07.691789    1415 log.go:172] (0xc000be0c60) (0xc0009b6500) Stream added, broadcasting: 5\nI0125 00:26:07.692673    1415 log.go:172] (0xc000be0c60) Reply frame received for 5\nI0125 00:26:07.769106    1415 log.go:172] (0xc000be0c60) Data frame received for 5\nI0125 00:26:07.769309    1415 log.go:172] (0xc0009b6500) (5) Data frame handling\nI0125 00:26:07.769339    1415 log.go:172] (0xc0009b6500) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30647\nI0125 00:26:07.772201    1415 log.go:172] (0xc000be0c60) Data frame received for 5\nI0125 00:26:07.772225    1415 log.go:172] (0xc0009b6500) (5) Data frame handling\nI0125 00:26:07.772242    1415 log.go:172] (0xc0009b6500) (5) Data frame sent\nConnection to 10.96.1.234 30647 port [tcp/30647] succeeded!\nI0125 00:26:07.848730    1415 log.go:172] (0xc000be0c60) Data frame received for 1\nI0125 00:26:07.848811    1415 log.go:172] (0xc000be0c60) (0xc000bd80a0) Stream removed, broadcasting: 3\nI0125 00:26:07.848863    1415 log.go:172] (0xc000b661e0) (1) Data frame handling\nI0125 00:26:07.848876    1415 log.go:172] (0xc000b661e0) (1) Data frame sent\nI0125 00:26:07.848884    1415 log.go:172] (0xc000be0c60) (0xc000b661e0) Stream removed, broadcasting: 1\nI0125 00:26:07.849779    1415 log.go:172] (0xc000be0c60) (0xc0009b6500) Stream removed, broadcasting: 5\nI0125 00:26:07.849816    1415 log.go:172] (0xc000be0c60) (0xc000b661e0) Stream removed, broadcasting: 1\nI0125 00:26:07.849825    1415 log.go:172] (0xc000be0c60) (0xc000bd80a0) Stream removed, broadcasting: 3\nI0125 00:26:07.849832    1415 log.go:172] (0xc000be0c60) (0xc0009b6500) Stream removed, broadcasting: 5\nI0125 00:26:07.849868    1415 log.go:172] (0xc000be0c60) Go away received\n"
Jan 25 00:26:07.859: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:26:07.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1256" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:21.125 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":126,"skipped":1986,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:26:07.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:26:07.947: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 25 00:26:13.031: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 00:26:17.224: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 25 00:26:19.396: INFO: Creating deployment "test-rollover-deployment"
Jan 25 00:26:19.449: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 25 00:26:21.487: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 25 00:26:21.514: INFO: Ensure that both replica sets have 1 created replica
Jan 25 00:26:21.542: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 25 00:26:21.552: INFO: Updating deployment test-rollover-deployment
Jan 25 00:26:21.552: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 25 00:26:23.597: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 25 00:26:23.654: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 25 00:26:23.671: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 00:26:23.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508781, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:25.720: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 00:26:25.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508781, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:27.692: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 00:26:27.692: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508781, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:29.695: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 00:26:29.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508788, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:31.682: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 00:26:31.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508788, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:33.691: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 00:26:33.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508788, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:35.682: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 00:26:35.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508788, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:37.682: INFO: all replica sets need to contain the pod-template-hash label
Jan 25 00:26:37.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508788, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508779, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:39.681: INFO: 
Jan 25 00:26:39.681: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67
Jan 25 00:26:39.691: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-4139 /apis/apps/v1/namespaces/deployment-4139/deployments/test-rollover-deployment 866a51d4-8c52-48e0-a469-e4afddce07b1 4127480 2 2020-01-25 00:26:19 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f586c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-25 00:26:19 +0000 UTC,LastTransitionTime:2020-01-25 00:26:19 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-25 00:26:38 +0000 UTC,LastTransitionTime:2020-01-25 00:26:19 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 25 00:26:39.696: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-4139 /apis/apps/v1/namespaces/deployment-4139/replicasets/test-rollover-deployment-574d6dfbff 852912e6-f81c-4afa-8427-fcb290df0329 4127470 2 2020-01-25 00:26:21 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 866a51d4-8c52-48e0-a469-e4afddce07b1 0xc00452d027 0xc00452d028}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00452d0a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 25 00:26:39.696: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 25 00:26:39.696: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-4139 /apis/apps/v1/namespaces/deployment-4139/replicasets/test-rollover-controller f05d09c4-5c07-41a1-a409-785e7a90ceeb 4127479 2 2020-01-25 00:26:07 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 866a51d4-8c52-48e0-a469-e4afddce07b1 0xc00452cf37 0xc00452cf38}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00452cfa8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 00:26:39.696: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-4139 /apis/apps/v1/namespaces/deployment-4139/replicasets/test-rollover-deployment-f6c94f66c 45878978-b3be-4286-8ec7-67109034968d 4127420 2 2020-01-25 00:26:19 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 866a51d4-8c52-48e0-a469-e4afddce07b1 0xc00452d110 0xc00452d111}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00452d188  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 00:26:39.703: INFO: Pod "test-rollover-deployment-574d6dfbff-92smm" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-92smm test-rollover-deployment-574d6dfbff- deployment-4139 /api/v1/namespaces/deployment-4139/pods/test-rollover-deployment-574d6dfbff-92smm 14a1b904-3ec6-4a76-8376-9631677609b7 4127444 0 2020-01-25 00:26:21 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 852912e6-f81c-4afa-8427-fcb290df0329 0xc002e553c7 0xc002e553c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v7l6h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v7l6h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v7l6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:26:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:26:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:26:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:26:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-25 00:26:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 00:26:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://47ca50a9ff6cd91adb85221cdc4a7299b5eef35307fa6450835f1663fab70523,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:26:39.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4139" for this suite.

• [SLOW TEST:31.853 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":127,"skipped":1990,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:26:39.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:26:39.935: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:26:41.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-542" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":128,"skipped":1999,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:26:41.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 25 00:26:42.111: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 25 00:26:44.125: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:46.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:48.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:50.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:26:52.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715508802, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 00:26:55.167: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:26:55.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:26:56.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2411" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:15.206 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":129,"skipped":2027,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:26:56.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating all guestbook components
Jan 25 00:26:56.911: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jan 25 00:26:56.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8150'
Jan 25 00:26:57.320: INFO: stderr: ""
Jan 25 00:26:57.320: INFO: stdout: "service/agnhost-slave created\n"
Jan 25 00:26:57.321: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jan 25 00:26:57.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8150'
Jan 25 00:26:57.650: INFO: stderr: ""
Jan 25 00:26:57.650: INFO: stdout: "service/agnhost-master created\n"
Jan 25 00:26:57.651: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 25 00:26:57.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8150'
Jan 25 00:26:58.056: INFO: stderr: ""
Jan 25 00:26:58.056: INFO: stdout: "service/frontend created\n"
Jan 25 00:26:58.057: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan 25 00:26:58.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8150'
Jan 25 00:26:58.377: INFO: stderr: ""
Jan 25 00:26:58.377: INFO: stdout: "deployment.apps/frontend created\n"
Jan 25 00:26:58.377: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 25 00:26:58.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8150'
Jan 25 00:26:58.878: INFO: stderr: ""
Jan 25 00:26:58.878: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan 25 00:26:58.879: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 25 00:26:58.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8150'
Jan 25 00:27:00.415: INFO: stderr: ""
Jan 25 00:27:00.416: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 25 00:27:00.416: INFO: Waiting for all frontend pods to be Running.
Jan 25 00:27:20.467: INFO: Waiting for frontend to serve content.
Jan 25 00:27:20.495: INFO: Trying to add a new entry to the guestbook.
Jan 25 00:27:20.514: INFO: Verifying that added entry can be retrieved.
Jan 25 00:27:20.525: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Jan 25 00:27:25.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8150'
Jan 25 00:27:25.948: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 00:27:25.948: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 00:27:25.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8150'
Jan 25 00:27:26.258: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 00:27:26.258: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 00:27:26.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8150'
Jan 25 00:27:26.576: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 00:27:26.576: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 00:27:26.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8150'
Jan 25 00:27:26.721: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 00:27:26.721: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 00:27:26.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8150'
Jan 25 00:27:26.853: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 00:27:26.853: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 25 00:27:26.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8150'
Jan 25 00:27:26.988: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 00:27:26.988: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:27:26.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8150" for this suite.

• [SLOW TEST:30.390 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:387
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":130,"skipped":2103,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:27:27.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-e7f70b5d-98c2-4b7e-960a-a6bfe2b1a195
STEP: Creating a pod to test consume secrets
Jan 25 00:27:29.817: INFO: Waiting up to 5m0s for pod "pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580" in namespace "secrets-6279" to be "success or failure"
Jan 25 00:27:29.868: INFO: Pod "pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580": Phase="Pending", Reason="", readiness=false. Elapsed: 50.63668ms
Jan 25 00:27:32.052: INFO: Pod "pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234353196s
Jan 25 00:27:34.062: INFO: Pod "pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244541s
Jan 25 00:27:36.070: INFO: Pod "pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252974256s
Jan 25 00:27:38.076: INFO: Pod "pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258956888s
Jan 25 00:27:40.083: INFO: Pod "pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580": Phase="Pending", Reason="", readiness=false. Elapsed: 10.266087117s
Jan 25 00:27:42.088: INFO: Pod "pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.271245104s
STEP: Saw pod success
Jan 25 00:27:42.089: INFO: Pod "pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580" satisfied condition "success or failure"
Jan 25 00:27:42.091: INFO: Trying to get logs from node jerma-node pod pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580 container secret-volume-test: 
STEP: delete the pod
Jan 25 00:27:42.186: INFO: Waiting for pod pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580 to disappear
Jan 25 00:27:42.195: INFO: Pod pod-secrets-678155ba-57d1-44c3-8e77-bea047e14580 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:27:42.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6279" for this suite.
STEP: Destroying namespace "secret-namespace-8194" for this suite.

• [SLOW TEST:15.172 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2129,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:27:42.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-projected-sxdv
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 00:27:42.471: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-sxdv" in namespace "subpath-3130" to be "success or failure"
Jan 25 00:27:42.508: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Pending", Reason="", readiness=false. Elapsed: 37.375828ms
Jan 25 00:27:44.515: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044350308s
Jan 25 00:27:46.529: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057686438s
Jan 25 00:27:48.539: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067954132s
Jan 25 00:27:50.561: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 8.089997313s
Jan 25 00:27:52.569: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 10.098289368s
Jan 25 00:27:54.574: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 12.103346106s
Jan 25 00:27:56.585: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 14.113824548s
Jan 25 00:27:58.599: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 16.128296765s
Jan 25 00:28:00.608: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 18.136903384s
Jan 25 00:28:02.614: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 20.143564701s
Jan 25 00:28:04.620: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 22.148879787s
Jan 25 00:28:06.628: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 24.157563467s
Jan 25 00:28:08.635: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 26.164197372s
Jan 25 00:28:10.641: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Running", Reason="", readiness=true. Elapsed: 28.170318808s
Jan 25 00:28:12.646: INFO: Pod "pod-subpath-test-projected-sxdv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.175140776s
STEP: Saw pod success
Jan 25 00:28:12.646: INFO: Pod "pod-subpath-test-projected-sxdv" satisfied condition "success or failure"
Jan 25 00:28:12.650: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-sxdv container test-container-subpath-projected-sxdv: 
STEP: delete the pod
Jan 25 00:28:12.693: INFO: Waiting for pod pod-subpath-test-projected-sxdv to disappear
Jan 25 00:28:12.698: INFO: Pod pod-subpath-test-projected-sxdv no longer exists
STEP: Deleting pod pod-subpath-test-projected-sxdv
Jan 25 00:28:12.699: INFO: Deleting pod "pod-subpath-test-projected-sxdv" in namespace "subpath-3130"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:28:12.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3130" for this suite.

• [SLOW TEST:30.448 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":132,"skipped":2144,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:28:12.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3744.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3744.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3744.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3744.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3744.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3744.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 00:28:25.021: INFO: DNS probes using dns-3744/dns-test-95f89469-c1c6-4367-8582-d7b6512b1072 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:28:25.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3744" for this suite.

• [SLOW TEST:12.626 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":133,"skipped":2178,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:28:25.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:28:41.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2065" for this suite.

• [SLOW TEST:16.216 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":134,"skipped":2220,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:28:41.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:28:41.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 25 00:28:45.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8309 create -f -'
Jan 25 00:28:47.882: INFO: stderr: ""
Jan 25 00:28:47.883: INFO: stdout: "e2e-test-crd-publish-openapi-9603-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 25 00:28:47.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8309 delete e2e-test-crd-publish-openapi-9603-crds test-foo'
Jan 25 00:28:48.028: INFO: stderr: ""
Jan 25 00:28:48.028: INFO: stdout: "e2e-test-crd-publish-openapi-9603-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 25 00:28:48.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8309 apply -f -'
Jan 25 00:28:48.303: INFO: stderr: ""
Jan 25 00:28:48.304: INFO: stdout: "e2e-test-crd-publish-openapi-9603-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 25 00:28:48.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8309 delete e2e-test-crd-publish-openapi-9603-crds test-foo'
Jan 25 00:28:48.470: INFO: stderr: ""
Jan 25 00:28:48.470: INFO: stdout: "e2e-test-crd-publish-openapi-9603-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 25 00:28:48.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8309 create -f -'
Jan 25 00:28:48.745: INFO: rc: 1
Jan 25 00:28:48.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8309 apply -f -'
Jan 25 00:28:49.049: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan 25 00:28:49.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8309 create -f -'
Jan 25 00:28:49.332: INFO: rc: 1
Jan 25 00:28:49.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8309 apply -f -'
Jan 25 00:28:49.608: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 25 00:28:49.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9603-crds'
Jan 25 00:28:49.905: INFO: stderr: ""
Jan 25 00:28:49.905: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9603-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 25 00:28:49.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9603-crds.metadata'
Jan 25 00:28:50.197: INFO: stderr: ""
Jan 25 00:28:50.197: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9603-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jan 25 00:28:50.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9603-crds.spec'
Jan 25 00:28:50.460: INFO: stderr: ""
Jan 25 00:28:50.460: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9603-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan 25 00:28:50.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9603-crds.spec.bars'
Jan 25 00:28:50.736: INFO: stderr: ""
Jan 25 00:28:50.736: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9603-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 25 00:28:50.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9603-crds.spec.bars2'
Jan 25 00:28:51.014: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:28:53.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8309" for this suite.

• [SLOW TEST:12.387 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":135,"skipped":2222,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:28:53.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1862
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 00:28:54.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5094'
Jan 25 00:28:54.191: INFO: stderr: ""
Jan 25 00:28:54.191: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1867
Jan 25 00:28:54.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5094'
Jan 25 00:29:01.162: INFO: stderr: ""
Jan 25 00:29:01.162: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:29:01.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5094" for this suite.

• [SLOW TEST:7.282 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1858
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":136,"skipped":2240,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:29:01.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 25 00:29:01.315: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:29:12.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2396" for this suite.

• [SLOW TEST:11.853 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":137,"skipped":2250,"failed":0}
SSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:29:13.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:29:21.314: INFO: Waiting up to 5m0s for pod "client-envvars-06999466-9a2c-49b5-8a41-e076849dd36c" in namespace "pods-8122" to be "success or failure"
Jan 25 00:29:21.339: INFO: Pod "client-envvars-06999466-9a2c-49b5-8a41-e076849dd36c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.154344ms
Jan 25 00:29:23.352: INFO: Pod "client-envvars-06999466-9a2c-49b5-8a41-e076849dd36c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038020479s
Jan 25 00:29:25.358: INFO: Pod "client-envvars-06999466-9a2c-49b5-8a41-e076849dd36c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043990343s
Jan 25 00:29:27.394: INFO: Pod "client-envvars-06999466-9a2c-49b5-8a41-e076849dd36c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079124003s
Jan 25 00:29:29.400: INFO: Pod "client-envvars-06999466-9a2c-49b5-8a41-e076849dd36c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085278869s
STEP: Saw pod success
Jan 25 00:29:29.400: INFO: Pod "client-envvars-06999466-9a2c-49b5-8a41-e076849dd36c" satisfied condition "success or failure"
Jan 25 00:29:29.404: INFO: Trying to get logs from node jerma-node pod client-envvars-06999466-9a2c-49b5-8a41-e076849dd36c container env3cont: 
STEP: delete the pod
Jan 25 00:29:29.558: INFO: Waiting for pod client-envvars-06999466-9a2c-49b5-8a41-e076849dd36c to disappear
Jan 25 00:29:29.584: INFO: Pod client-envvars-06999466-9a2c-49b5-8a41-e076849dd36c no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:29:29.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8122" for this suite.

• [SLOW TEST:16.514 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2255,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:29:29.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-4757
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 25 00:29:29.741: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 25 00:30:06.039: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4757 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 00:30:06.039: INFO: >>> kubeConfig: /root/.kube/config
I0125 00:30:06.095642       9 log.go:172] (0xc000af4210) (0xc00126c140) Create stream
I0125 00:30:06.095713       9 log.go:172] (0xc000af4210) (0xc00126c140) Stream added, broadcasting: 1
I0125 00:30:06.101110       9 log.go:172] (0xc000af4210) Reply frame received for 1
I0125 00:30:06.101165       9 log.go:172] (0xc000af4210) (0xc001c4a820) Create stream
I0125 00:30:06.101189       9 log.go:172] (0xc000af4210) (0xc001c4a820) Stream added, broadcasting: 3
I0125 00:30:06.104678       9 log.go:172] (0xc000af4210) Reply frame received for 3
I0125 00:30:06.104703       9 log.go:172] (0xc000af4210) (0xc002b999a0) Create stream
I0125 00:30:06.104711       9 log.go:172] (0xc000af4210) (0xc002b999a0) Stream added, broadcasting: 5
I0125 00:30:06.107769       9 log.go:172] (0xc000af4210) Reply frame received for 5
I0125 00:30:07.185135       9 log.go:172] (0xc000af4210) Data frame received for 3
I0125 00:30:07.185181       9 log.go:172] (0xc001c4a820) (3) Data frame handling
I0125 00:30:07.185204       9 log.go:172] (0xc001c4a820) (3) Data frame sent
I0125 00:30:07.285920       9 log.go:172] (0xc000af4210) Data frame received for 1
I0125 00:30:07.285994       9 log.go:172] (0xc000af4210) (0xc002b999a0) Stream removed, broadcasting: 5
I0125 00:30:07.286030       9 log.go:172] (0xc00126c140) (1) Data frame handling
I0125 00:30:07.286101       9 log.go:172] (0xc00126c140) (1) Data frame sent
I0125 00:30:07.286137       9 log.go:172] (0xc000af4210) (0xc001c4a820) Stream removed, broadcasting: 3
I0125 00:30:07.286176       9 log.go:172] (0xc000af4210) (0xc00126c140) Stream removed, broadcasting: 1
I0125 00:30:07.286203       9 log.go:172] (0xc000af4210) Go away received
I0125 00:30:07.287067       9 log.go:172] (0xc000af4210) (0xc00126c140) Stream removed, broadcasting: 1
I0125 00:30:07.287087       9 log.go:172] (0xc000af4210) (0xc001c4a820) Stream removed, broadcasting: 3
I0125 00:30:07.287108       9 log.go:172] (0xc000af4210) (0xc002b999a0) Stream removed, broadcasting: 5
Jan 25 00:30:07.287: INFO: Found all expected endpoints: [netserver-0]
Jan 25 00:30:07.293: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4757 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 00:30:07.293: INFO: >>> kubeConfig: /root/.kube/config
I0125 00:30:07.345253       9 log.go:172] (0xc000af48f0) (0xc00126c820) Create stream
I0125 00:30:07.345373       9 log.go:172] (0xc000af48f0) (0xc00126c820) Stream added, broadcasting: 1
I0125 00:30:07.363433       9 log.go:172] (0xc000af48f0) Reply frame received for 1
I0125 00:30:07.363564       9 log.go:172] (0xc000af48f0) (0xc00126c960) Create stream
I0125 00:30:07.363598       9 log.go:172] (0xc000af48f0) (0xc00126c960) Stream added, broadcasting: 3
I0125 00:30:07.365683       9 log.go:172] (0xc000af48f0) Reply frame received for 3
I0125 00:30:07.365726       9 log.go:172] (0xc000af48f0) (0xc00295e780) Create stream
I0125 00:30:07.365740       9 log.go:172] (0xc000af48f0) (0xc00295e780) Stream added, broadcasting: 5
I0125 00:30:07.369694       9 log.go:172] (0xc000af48f0) Reply frame received for 5
I0125 00:30:08.472226       9 log.go:172] (0xc000af48f0) Data frame received for 3
I0125 00:30:08.472424       9 log.go:172] (0xc00126c960) (3) Data frame handling
I0125 00:30:08.472462       9 log.go:172] (0xc00126c960) (3) Data frame sent
I0125 00:30:08.618097       9 log.go:172] (0xc000af48f0) (0xc00126c960) Stream removed, broadcasting: 3
I0125 00:30:08.618364       9 log.go:172] (0xc000af48f0) Data frame received for 1
I0125 00:30:08.618383       9 log.go:172] (0xc00126c820) (1) Data frame handling
I0125 00:30:08.618400       9 log.go:172] (0xc00126c820) (1) Data frame sent
I0125 00:30:08.618409       9 log.go:172] (0xc000af48f0) (0xc00126c820) Stream removed, broadcasting: 1
I0125 00:30:08.618691       9 log.go:172] (0xc000af48f0) (0xc00295e780) Stream removed, broadcasting: 5
I0125 00:30:08.618738       9 log.go:172] (0xc000af48f0) (0xc00126c820) Stream removed, broadcasting: 1
I0125 00:30:08.618746       9 log.go:172] (0xc000af48f0) (0xc00126c960) Stream removed, broadcasting: 3
I0125 00:30:08.618751       9 log.go:172] (0xc000af48f0) (0xc00295e780) Stream removed, broadcasting: 5
I0125 00:30:08.618903       9 log.go:172] (0xc000af48f0) Go away received
Jan 25 00:30:08.619: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:30:08.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4757" for this suite.

• [SLOW TEST:39.073 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2323,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:30:08.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-96c5576d-6163-4400-a538-131b5c4faf07
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:30:08.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-427" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":140,"skipped":2358,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:30:08.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-d446647d-9515-4305-b907-aa3e1e9aadd3
STEP: Creating a pod to test consume secrets
Jan 25 00:30:09.009: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d" in namespace "projected-4284" to be "success or failure"
Jan 25 00:30:09.042: INFO: Pod "pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 33.487092ms
Jan 25 00:30:11.047: INFO: Pod "pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037906176s
Jan 25 00:30:13.053: INFO: Pod "pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044128726s
Jan 25 00:30:15.067: INFO: Pod "pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058191026s
Jan 25 00:30:18.225: INFO: Pod "pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.216322182s
Jan 25 00:30:20.231: INFO: Pod "pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.22249422s
Jan 25 00:30:22.237: INFO: Pod "pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.228483194s
STEP: Saw pod success
Jan 25 00:30:22.238: INFO: Pod "pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d" satisfied condition "success or failure"
Jan 25 00:30:22.241: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 25 00:30:22.286: INFO: Waiting for pod pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d to disappear
Jan 25 00:30:22.312: INFO: Pod pod-projected-secrets-32ebfebf-a682-4513-87d4-62cdb2307f2d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:30:22.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4284" for this suite.

• [SLOW TEST:13.480 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2361,"failed":0}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:30:22.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-gp5m
STEP: Creating a pod to test atomic-volume-subpath
Jan 25 00:30:22.542: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gp5m" in namespace "subpath-5282" to be "success or failure"
Jan 25 00:30:22.640: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Pending", Reason="", readiness=false. Elapsed: 98.043137ms
Jan 25 00:30:24.645: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103648761s
Jan 25 00:30:26.654: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112147545s
Jan 25 00:30:28.661: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119285652s
Jan 25 00:30:30.671: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 8.129087932s
Jan 25 00:30:32.678: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 10.136526976s
Jan 25 00:30:34.687: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 12.144987864s
Jan 25 00:30:36.696: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 14.154037268s
Jan 25 00:30:38.701: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 16.159476608s
Jan 25 00:30:40.708: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 18.166251364s
Jan 25 00:30:42.717: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 20.17492852s
Jan 25 00:30:44.726: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 22.183710686s
Jan 25 00:30:46.732: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 24.189907496s
Jan 25 00:30:48.738: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 26.195808566s
Jan 25 00:30:50.762: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Running", Reason="", readiness=true. Elapsed: 28.219723843s
Jan 25 00:30:52.771: INFO: Pod "pod-subpath-test-configmap-gp5m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.229146817s
STEP: Saw pod success
Jan 25 00:30:52.771: INFO: Pod "pod-subpath-test-configmap-gp5m" satisfied condition "success or failure"
Jan 25 00:30:52.774: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-gp5m container test-container-subpath-configmap-gp5m: <nil>
STEP: delete the pod
Jan 25 00:30:52.830: INFO: Waiting for pod pod-subpath-test-configmap-gp5m to disappear
Jan 25 00:30:52.839: INFO: Pod pod-subpath-test-configmap-gp5m no longer exists
STEP: Deleting pod pod-subpath-test-configmap-gp5m
Jan 25 00:30:52.839: INFO: Deleting pod "pod-subpath-test-configmap-gp5m" in namespace "subpath-5282"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:30:52.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5282" for this suite.

• [SLOW TEST:30.535 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":142,"skipped":2366,"failed":0}
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:30:52.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3493.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3493.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3493.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3493.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 00:31:05.089: INFO: DNS probes using dns-test-a2770173-405f-48fa-850b-7efa3bfb3481 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3493.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3493.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3493.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3493.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 00:31:17.206: INFO: File wheezy_udp@dns-test-service-3.dns-3493.svc.cluster.local from pod  dns-3493/dns-test-0cb4d568-2762-4c98-a9c7-05bcbe196c7c contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 00:31:17.219: INFO: File jessie_udp@dns-test-service-3.dns-3493.svc.cluster.local from pod  dns-3493/dns-test-0cb4d568-2762-4c98-a9c7-05bcbe196c7c contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 00:31:17.219: INFO: Lookups using dns-3493/dns-test-0cb4d568-2762-4c98-a9c7-05bcbe196c7c failed for: [wheezy_udp@dns-test-service-3.dns-3493.svc.cluster.local jessie_udp@dns-test-service-3.dns-3493.svc.cluster.local]

Jan 25 00:31:22.230: INFO: File wheezy_udp@dns-test-service-3.dns-3493.svc.cluster.local from pod  dns-3493/dns-test-0cb4d568-2762-4c98-a9c7-05bcbe196c7c contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 00:31:22.234: INFO: File jessie_udp@dns-test-service-3.dns-3493.svc.cluster.local from pod  dns-3493/dns-test-0cb4d568-2762-4c98-a9c7-05bcbe196c7c contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 00:31:22.234: INFO: Lookups using dns-3493/dns-test-0cb4d568-2762-4c98-a9c7-05bcbe196c7c failed for: [wheezy_udp@dns-test-service-3.dns-3493.svc.cluster.local jessie_udp@dns-test-service-3.dns-3493.svc.cluster.local]

Jan 25 00:31:27.225: INFO: File wheezy_udp@dns-test-service-3.dns-3493.svc.cluster.local from pod  dns-3493/dns-test-0cb4d568-2762-4c98-a9c7-05bcbe196c7c contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 00:31:27.229: INFO: File jessie_udp@dns-test-service-3.dns-3493.svc.cluster.local from pod  dns-3493/dns-test-0cb4d568-2762-4c98-a9c7-05bcbe196c7c contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 25 00:31:27.229: INFO: Lookups using dns-3493/dns-test-0cb4d568-2762-4c98-a9c7-05bcbe196c7c failed for: [wheezy_udp@dns-test-service-3.dns-3493.svc.cluster.local jessie_udp@dns-test-service-3.dns-3493.svc.cluster.local]

Jan 25 00:31:32.234: INFO: DNS probes using dns-test-0cb4d568-2762-4c98-a9c7-05bcbe196c7c succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3493.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3493.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3493.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3493.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 00:31:46.481: INFO: DNS probes using dns-test-89955e19-1eaa-48d8-881a-5e6676129883 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:31:46.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3493" for this suite.

• [SLOW TEST:53.810 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":143,"skipped":2374,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:31:46.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:31:46.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 25 00:31:50.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 create -f -'
Jan 25 00:31:53.953: INFO: stderr: ""
Jan 25 00:31:53.953: INFO: stdout: "e2e-test-crd-publish-openapi-5131-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 25 00:31:53.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 delete e2e-test-crd-publish-openapi-5131-crds test-cr'
Jan 25 00:31:54.210: INFO: stderr: ""
Jan 25 00:31:54.210: INFO: stdout: "e2e-test-crd-publish-openapi-5131-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 25 00:31:54.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 apply -f -'
Jan 25 00:31:54.504: INFO: stderr: ""
Jan 25 00:31:54.504: INFO: stdout: "e2e-test-crd-publish-openapi-5131-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 25 00:31:54.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8199 delete e2e-test-crd-publish-openapi-5131-crds test-cr'
Jan 25 00:31:54.645: INFO: stderr: ""
Jan 25 00:31:54.645: INFO: stdout: "e2e-test-crd-publish-openapi-5131-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 25 00:31:54.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5131-crds'
Jan 25 00:31:54.884: INFO: stderr: ""
Jan 25 00:31:54.884: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5131-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:31:57.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8199" for this suite.

• [SLOW TEST:11.189 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":144,"skipped":2374,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:31:57.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:32:04.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9513" for this suite.

• [SLOW TEST:6.169 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2391,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:32:04.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5336
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5336
STEP: creating replication controller externalsvc in namespace services-5336
I0125 00:32:04.360044       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5336, replica count: 2
I0125 00:32:07.410691       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:32:10.410946       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:32:13.411262       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 00:32:16.411658       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jan 25 00:32:16.468: INFO: Creating new exec pod
Jan 25 00:32:24.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5336 execpodn4wnr -- /bin/sh -x -c nslookup clusterip-service'
Jan 25 00:32:24.952: INFO: stderr: "I0125 00:32:24.753460    2090 log.go:172] (0xc0008b4dc0) (0xc000a501e0) Create stream\nI0125 00:32:24.753852    2090 log.go:172] (0xc0008b4dc0) (0xc000a501e0) Stream added, broadcasting: 1\nI0125 00:32:24.763039    2090 log.go:172] (0xc0008b4dc0) Reply frame received for 1\nI0125 00:32:24.763121    2090 log.go:172] (0xc0008b4dc0) (0xc000a50280) Create stream\nI0125 00:32:24.763132    2090 log.go:172] (0xc0008b4dc0) (0xc000a50280) Stream added, broadcasting: 3\nI0125 00:32:24.764746    2090 log.go:172] (0xc0008b4dc0) Reply frame received for 3\nI0125 00:32:24.764777    2090 log.go:172] (0xc0008b4dc0) (0xc000a503c0) Create stream\nI0125 00:32:24.764789    2090 log.go:172] (0xc0008b4dc0) (0xc000a503c0) Stream added, broadcasting: 5\nI0125 00:32:24.766400    2090 log.go:172] (0xc0008b4dc0) Reply frame received for 5\nI0125 00:32:24.835374    2090 log.go:172] (0xc0008b4dc0) Data frame received for 5\nI0125 00:32:24.835425    2090 log.go:172] (0xc000a503c0) (5) Data frame handling\nI0125 00:32:24.835452    2090 log.go:172] (0xc000a503c0) (5) Data frame sent\n+ nslookup clusterip-service\nI0125 00:32:24.856957    2090 log.go:172] (0xc0008b4dc0) Data frame received for 3\nI0125 00:32:24.857102    2090 log.go:172] (0xc000a50280) (3) Data frame handling\nI0125 00:32:24.857137    2090 log.go:172] (0xc000a50280) (3) Data frame sent\nI0125 00:32:24.859461    2090 log.go:172] (0xc0008b4dc0) Data frame received for 3\nI0125 00:32:24.859473    2090 log.go:172] (0xc000a50280) (3) Data frame handling\nI0125 00:32:24.859493    2090 log.go:172] (0xc000a50280) (3) Data frame sent\nI0125 00:32:24.944872    2090 log.go:172] (0xc0008b4dc0) (0xc000a50280) Stream removed, broadcasting: 3\nI0125 00:32:24.945063    2090 log.go:172] (0xc0008b4dc0) Data frame received for 1\nI0125 00:32:24.945077    2090 log.go:172] (0xc000a501e0) (1) Data frame handling\nI0125 00:32:24.945105    2090 log.go:172] (0xc000a501e0) (1) Data frame sent\nI0125 00:32:24.945227    2090 log.go:172] (0xc0008b4dc0) (0xc000a501e0) Stream removed, broadcasting: 1\nI0125 00:32:24.945330    2090 log.go:172] (0xc0008b4dc0) (0xc000a503c0) Stream removed, broadcasting: 5\nI0125 00:32:24.945403    2090 log.go:172] (0xc0008b4dc0) Go away received\nI0125 00:32:24.947075    2090 log.go:172] (0xc0008b4dc0) (0xc000a501e0) Stream removed, broadcasting: 1\nI0125 00:32:24.947090    2090 log.go:172] (0xc0008b4dc0) (0xc000a50280) Stream removed, broadcasting: 3\nI0125 00:32:24.947097    2090 log.go:172] (0xc0008b4dc0) (0xc000a503c0) Stream removed, broadcasting: 5\n"
Jan 25 00:32:24.952: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5336.svc.cluster.local\tcanonical name = externalsvc.services-5336.svc.cluster.local.\nName:\texternalsvc.services-5336.svc.cluster.local\nAddress: 10.96.153.45\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5336, will wait for the garbage collector to delete the pods
Jan 25 00:32:25.016: INFO: Deleting ReplicationController externalsvc took: 8.778747ms
Jan 25 00:32:25.316: INFO: Terminating ReplicationController externalsvc pods took: 300.385852ms
Jan 25 00:32:33.601: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:32:33.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5336" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:29.617 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":146,"skipped":2397,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:32:33.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating pod
Jan 25 00:32:43.842: INFO: Pod pod-hostip-1aef5f90-244a-4503-9e67-cdb278fa6dcc has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:32:43.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1416" for this suite.

• [SLOW TEST:10.259 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2399,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:32:43.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 25 00:32:43.979: INFO: Waiting up to 5m0s for pod "pod-5f67dac2-5587-4c42-8990-588b244bc59a" in namespace "emptydir-2706" to be "success or failure"
Jan 25 00:32:43.994: INFO: Pod "pod-5f67dac2-5587-4c42-8990-588b244bc59a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.089381ms
Jan 25 00:32:46.000: INFO: Pod "pod-5f67dac2-5587-4c42-8990-588b244bc59a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020283952s
Jan 25 00:32:48.005: INFO: Pod "pod-5f67dac2-5587-4c42-8990-588b244bc59a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02562669s
Jan 25 00:32:50.009: INFO: Pod "pod-5f67dac2-5587-4c42-8990-588b244bc59a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029934591s
Jan 25 00:32:52.029: INFO: Pod "pod-5f67dac2-5587-4c42-8990-588b244bc59a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04917981s
STEP: Saw pod success
Jan 25 00:32:52.029: INFO: Pod "pod-5f67dac2-5587-4c42-8990-588b244bc59a" satisfied condition "success or failure"
Jan 25 00:32:52.049: INFO: Trying to get logs from node jerma-node pod pod-5f67dac2-5587-4c42-8990-588b244bc59a container test-container: <nil>
STEP: delete the pod
Jan 25 00:32:52.197: INFO: Waiting for pod pod-5f67dac2-5587-4c42-8990-588b244bc59a to disappear
Jan 25 00:32:52.217: INFO: Pod pod-5f67dac2-5587-4c42-8990-588b244bc59a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:32:52.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2706" for this suite.

• [SLOW TEST:8.348 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2401,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:32:52.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 25 00:32:52.435: INFO: Waiting up to 5m0s for pod "pod-83548c80-3e08-4e6d-ba9d-ef2d7eaa6e70" in namespace "emptydir-6506" to be "success or failure"
Jan 25 00:32:52.451: INFO: Pod "pod-83548c80-3e08-4e6d-ba9d-ef2d7eaa6e70": Phase="Pending", Reason="", readiness=false. Elapsed: 15.589961ms
Jan 25 00:32:54.457: INFO: Pod "pod-83548c80-3e08-4e6d-ba9d-ef2d7eaa6e70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020996027s
Jan 25 00:32:56.464: INFO: Pod "pod-83548c80-3e08-4e6d-ba9d-ef2d7eaa6e70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028038809s
Jan 25 00:32:58.470: INFO: Pod "pod-83548c80-3e08-4e6d-ba9d-ef2d7eaa6e70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03433417s
Jan 25 00:33:00.481: INFO: Pod "pod-83548c80-3e08-4e6d-ba9d-ef2d7eaa6e70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045178715s
STEP: Saw pod success
Jan 25 00:33:00.481: INFO: Pod "pod-83548c80-3e08-4e6d-ba9d-ef2d7eaa6e70" satisfied condition "success or failure"
Jan 25 00:33:00.487: INFO: Trying to get logs from node jerma-node pod pod-83548c80-3e08-4e6d-ba9d-ef2d7eaa6e70 container test-container: <nil>
STEP: delete the pod
Jan 25 00:33:00.891: INFO: Waiting for pod pod-83548c80-3e08-4e6d-ba9d-ef2d7eaa6e70 to disappear
Jan 25 00:33:00.897: INFO: Pod pod-83548c80-3e08-4e6d-ba9d-ef2d7eaa6e70 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:33:00.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6506" for this suite.

• [SLOW TEST:8.667 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2419,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:33:00.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:33:06.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8866" for this suite.

• [SLOW TEST:5.965 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":150,"skipped":2420,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:33:06.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:43
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 25 00:33:20.117: INFO: start=2020-01-25 00:33:15.097157917 +0000 UTC m=+3259.718731832, now=2020-01-25 00:33:20.117801131 +0000 UTC m=+3264.739375111, kubelet pod: {"metadata":{"name":"pod-submit-remove-719f278b-a761-46f5-b343-c912f135e699","namespace":"pods-3514","selfLink":"/api/v1/namespaces/pods-3514/pods/pod-submit-remove-719f278b-a761-46f5-b343-c912f135e699","uid":"16b244de-946f-42e3-b1cf-74411c62d2ff","resourceVersion":"4129477","creationTimestamp":"2020-01-25T00:33:07Z","deletionTimestamp":"2020-01-25T00:33:45Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"993973886"},"annotations":{"kubernetes.io/config.seen":"2020-01-25T00:33:07.016516566Z","kubernetes.io/config.source":"api"}},"spec":{"volumes":[{"name":"default-token-fvnrg","secret":{"secretName":"default-token-fvnrg","defaultMode":420}}],"containers":[{"name":"agnhost","image":"gcr.io/kubernetes-e2e-test-images/agnhost:2.8","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-fvnrg","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"jerma-node","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Pending","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-01-25T00:33:07Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2020-01-25T00:33:19Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2020-01-25T00:33:19Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-01-25T00:33:07Z"}],"hostIP":"10.96.2.250","podIP":"10.44.0.1","podIPs":[{"ip":"10.44.0.1"}],"startTime":"2020-01-25T00:33:07Z","containerStatuses":[{"name":"agnhost","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{},"ready":false,"restartCount":0,"image":"gcr.io/kubernetes-e2e-test-images/agnhost:2.8","imageID":"","started":false}],"qosClass":"BestEffort"}}
Jan 25 00:33:25.104: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:33:25.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3514" for this suite.

• [SLOW TEST:18.233 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":151,"skipped":2443,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:33:25.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:33:25.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1300'
Jan 25 00:33:25.645: INFO: stderr: ""
Jan 25 00:33:25.645: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jan 25 00:33:25.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1300'
Jan 25 00:33:26.138: INFO: stderr: ""
Jan 25 00:33:26.138: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 25 00:33:27.145: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 00:33:27.145: INFO: Found 0 / 1
Jan 25 00:33:28.146: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 00:33:28.146: INFO: Found 0 / 1
Jan 25 00:33:29.142: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 00:33:29.143: INFO: Found 0 / 1
Jan 25 00:33:30.147: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 00:33:30.147: INFO: Found 0 / 1
Jan 25 00:33:31.156: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 00:33:31.156: INFO: Found 0 / 1
Jan 25 00:33:32.150: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 00:33:32.150: INFO: Found 0 / 1
Jan 25 00:33:33.148: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 00:33:33.148: INFO: Found 1 / 1
Jan 25 00:33:33.148: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 25 00:33:33.153: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 25 00:33:33.153: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 25 00:33:33.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-llzt4 --namespace=kubectl-1300'
Jan 25 00:33:33.410: INFO: stderr: ""
Jan 25 00:33:33.410: INFO: stdout: "Name:         agnhost-master-llzt4\nNamespace:    kubectl-1300\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Sat, 25 Jan 2020 00:33:25 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://16dc0de9f40a5db47d485a0d099da252f9cac558ec485bfc8adc8ea6b9499a9b\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 25 Jan 2020 00:33:31 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qpqt7 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-qpqt7:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-qpqt7\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned kubectl-1300/agnhost-master-llzt4 to jerma-node\n  Normal  Pulled     4s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    3s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    2s         kubelet, jerma-node  Started container agnhost-master\n"
Jan 25 00:33:33.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1300'
Jan 25 00:33:33.534: INFO: stderr: ""
Jan 25 00:33:33.534: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-1300\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: agnhost-master-llzt4\n"
Jan 25 00:33:33.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1300'
Jan 25 00:33:33.672: INFO: stderr: ""
Jan 25 00:33:33.672: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-1300\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.157.198\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan 25 00:33:33.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Jan 25 00:33:33.836: INFO: stderr: ""
Jan 25 00:33:33.836: INFO: stdout: "Name:               jerma-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     <unset>\n  RenewTime:       Sat, 25 Jan 2020 00:33:29 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 25 Jan 2020 00:31:47 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 25 Jan 2020 00:31:47 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 25 Jan 2020 00:31:47 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 25 Jan 2020 00:31:47 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         20d\n  kubectl-1300                agnhost-master-llzt4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Jan 25 00:33:33.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1300'
Jan 25 00:33:33.976: INFO: stderr: ""
Jan 25 00:33:33.976: INFO: stdout: "Name:         kubectl-1300\nLabels:       e2e-framework=kubectl\n              e2e-run=997416f5-d161-4209-ae7b-e3b49d7df842\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:33:33.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1300" for this suite.

• [SLOW TEST:8.866 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1155
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":152,"skipped":2459,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:33:33.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 25 00:33:42.705: INFO: Successfully updated pod "annotationupdatec1f4d405-adb6-481f-a5b3-49663da12807"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:33:44.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5356" for this suite.

• [SLOW TEST:10.779 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2508,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:33:44.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6736
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating stateful set ss in namespace statefulset-6736
STEP: Waiting until all replicas of stateful set ss are running in namespace statefulset-6736
Jan 25 00:33:44.933: INFO: Found 0 stateful pods, waiting for 1
Jan 25 00:33:54.939: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 25 00:33:54.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 00:33:55.378: INFO: stderr: "I0125 00:33:55.188435    2252 log.go:172] (0xc000aa80b0) (0xc00023e1e0) Create stream\nI0125 00:33:55.188759    2252 log.go:172] (0xc000aa80b0) (0xc00023e1e0) Stream added, broadcasting: 1\nI0125 00:33:55.192165    2252 log.go:172] (0xc000aa80b0) Reply frame received for 1\nI0125 00:33:55.192199    2252 log.go:172] (0xc000aa80b0) (0xc000864000) Create stream\nI0125 00:33:55.192208    2252 log.go:172] (0xc000aa80b0) (0xc000864000) Stream added, broadcasting: 3\nI0125 00:33:55.193323    2252 log.go:172] (0xc000aa80b0) Reply frame received for 3\nI0125 00:33:55.193395    2252 log.go:172] (0xc000aa80b0) (0xc00023e280) Create stream\nI0125 00:33:55.193405    2252 log.go:172] (0xc000aa80b0) (0xc00023e280) Stream added, broadcasting: 5\nI0125 00:33:55.194875    2252 log.go:172] (0xc000aa80b0) Reply frame received for 5\nI0125 00:33:55.264320    2252 log.go:172] (0xc000aa80b0) Data frame received for 5\nI0125 00:33:55.264421    2252 log.go:172] (0xc00023e280) (5) Data frame handling\nI0125 00:33:55.264463    2252 log.go:172] (0xc00023e280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 00:33:55.298048    2252 log.go:172] (0xc000aa80b0) Data frame received for 3\nI0125 00:33:55.298086    2252 log.go:172] (0xc000864000) (3) Data frame handling\nI0125 00:33:55.298107    2252 log.go:172] (0xc000864000) (3) Data frame sent\nI0125 00:33:55.370784    2252 log.go:172] (0xc000aa80b0) Data frame received for 1\nI0125 00:33:55.370852    2252 log.go:172] (0xc00023e1e0) (1) Data frame handling\nI0125 00:33:55.370886    2252 log.go:172] (0xc00023e1e0) (1) Data frame sent\nI0125 00:33:55.370955    2252 log.go:172] (0xc000aa80b0) (0xc00023e280) Stream removed, broadcasting: 5\nI0125 00:33:55.371060    2252 log.go:172] (0xc000aa80b0) (0xc000864000) Stream removed, broadcasting: 3\nI0125 00:33:55.371102    2252 log.go:172] (0xc000aa80b0) (0xc00023e1e0) Stream removed, broadcasting: 1\nI0125 00:33:55.371154    2252 log.go:172] (0xc000aa80b0) Go away received\nI0125 00:33:55.372327    2252 log.go:172] (0xc000aa80b0) (0xc00023e1e0) Stream removed, broadcasting: 1\nI0125 00:33:55.372348    2252 log.go:172] (0xc000aa80b0) (0xc000864000) Stream removed, broadcasting: 3\nI0125 00:33:55.372358    2252 log.go:172] (0xc000aa80b0) (0xc00023e280) Stream removed, broadcasting: 5\n"
Jan 25 00:33:55.378: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 00:33:55.378: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
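The mv above is how the test manufactures an unhealthy pod: the webserver's readiness probe depends on index.html being served, so moving the file aside flips ss-0 to Ready=false without restarting the container, and moving it back (as the test does later) restores readiness. A minimal sketch of the toggle, assuming the probe really keys off that file:

  # hedged sketch of the readiness toggle; the probe belongs to the test's
  # httpd-based webserver image, not to kubectl itself
  kubectl exec --namespace=statefulset-6736 ss-0 -- sh -c \
    'mv /usr/local/apache2/htdocs/index.html /tmp/'    # probe starts failing -> Ready=false
  kubectl exec --namespace=statefulset-6736 ss-0 -- sh -c \
    'mv /tmp/index.html /usr/local/apache2/htdocs/'    # probe recovers -> Ready=true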

Jan 25 00:33:55.383: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 25 00:34:05.388: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 00:34:05.388: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 00:34:05.416: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 25 00:34:05.416: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  }]
Jan 25 00:34:05.416: INFO: 
Jan 25 00:34:05.416: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 25 00:34:07.600: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987488025s
Jan 25 00:34:09.142: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.803046813s
Jan 25 00:34:10.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.261054467s
Jan 25 00:34:12.251: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.253594308s
Jan 25 00:34:13.566: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.15270747s
Jan 25 00:34:14.572: INFO: Verifying statefulset ss doesn't scale past 3 for another 837.020764ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6736
Jan 25 00:34:15.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:34:15.965: INFO: stderr: "I0125 00:34:15.792980    2272 log.go:172] (0xc000ac6000) (0xc0008ec000) Create stream\nI0125 00:34:15.793135    2272 log.go:172] (0xc000ac6000) (0xc0008ec000) Stream added, broadcasting: 1\nI0125 00:34:15.797403    2272 log.go:172] (0xc000ac6000) Reply frame received for 1\nI0125 00:34:15.797457    2272 log.go:172] (0xc000ac6000) (0xc000718000) Create stream\nI0125 00:34:15.797464    2272 log.go:172] (0xc000ac6000) (0xc000718000) Stream added, broadcasting: 3\nI0125 00:34:15.799731    2272 log.go:172] (0xc000ac6000) Reply frame received for 3\nI0125 00:34:15.799866    2272 log.go:172] (0xc000ac6000) (0xc00091a1e0) Create stream\nI0125 00:34:15.800028    2272 log.go:172] (0xc000ac6000) (0xc00091a1e0) Stream added, broadcasting: 5\nI0125 00:34:15.804422    2272 log.go:172] (0xc000ac6000) Reply frame received for 5\nI0125 00:34:15.872953    2272 log.go:172] (0xc000ac6000) Data frame received for 5\nI0125 00:34:15.873015    2272 log.go:172] (0xc00091a1e0) (5) Data frame handling\nI0125 00:34:15.873037    2272 log.go:172] (0xc00091a1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 00:34:15.874039    2272 log.go:172] (0xc000ac6000) Data frame received for 3\nI0125 00:34:15.874119    2272 log.go:172] (0xc000718000) (3) Data frame handling\nI0125 00:34:15.874152    2272 log.go:172] (0xc000718000) (3) Data frame sent\nI0125 00:34:15.954225    2272 log.go:172] (0xc000ac6000) Data frame received for 1\nI0125 00:34:15.954507    2272 log.go:172] (0xc000ac6000) (0xc000718000) Stream removed, broadcasting: 3\nI0125 00:34:15.954606    2272 log.go:172] (0xc0008ec000) (1) Data frame handling\nI0125 00:34:15.954677    2272 log.go:172] (0xc0008ec000) (1) Data frame sent\nI0125 00:34:15.954748    2272 log.go:172] (0xc000ac6000) (0xc00091a1e0) Stream removed, broadcasting: 5\nI0125 00:34:15.954828    2272 log.go:172] (0xc000ac6000) (0xc0008ec000) Stream removed, broadcasting: 1\nI0125 00:34:15.954862    2272 log.go:172] (0xc000ac6000) Go away received\nI0125 00:34:15.957335    2272 log.go:172] (0xc000ac6000) (0xc0008ec000) Stream removed, broadcasting: 1\nI0125 00:34:15.957352    2272 log.go:172] (0xc000ac6000) (0xc000718000) Stream removed, broadcasting: 3\nI0125 00:34:15.957365    2272 log.go:172] (0xc000ac6000) (0xc00091a1e0) Stream removed, broadcasting: 5\n"
Jan 25 00:34:15.965: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 00:34:15.965: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 00:34:15.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:34:16.290: INFO: stderr: "I0125 00:34:16.110277    2292 log.go:172] (0xc0003caf20) (0xc0009b2140) Create stream\nI0125 00:34:16.110464    2292 log.go:172] (0xc0003caf20) (0xc0009b2140) Stream added, broadcasting: 1\nI0125 00:34:16.113340    2292 log.go:172] (0xc0003caf20) Reply frame received for 1\nI0125 00:34:16.113411    2292 log.go:172] (0xc0003caf20) (0xc000787540) Create stream\nI0125 00:34:16.113421    2292 log.go:172] (0xc0003caf20) (0xc000787540) Stream added, broadcasting: 3\nI0125 00:34:16.114903    2292 log.go:172] (0xc0003caf20) Reply frame received for 3\nI0125 00:34:16.114943    2292 log.go:172] (0xc0003caf20) (0xc0009b2280) Create stream\nI0125 00:34:16.114949    2292 log.go:172] (0xc0003caf20) (0xc0009b2280) Stream added, broadcasting: 5\nI0125 00:34:16.117611    2292 log.go:172] (0xc0003caf20) Reply frame received for 5\nI0125 00:34:16.181229    2292 log.go:172] (0xc0003caf20) Data frame received for 5\nI0125 00:34:16.181286    2292 log.go:172] (0xc0009b2280) (5) Data frame handling\nI0125 00:34:16.181309    2292 log.go:172] (0xc0009b2280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 00:34:16.183279    2292 log.go:172] (0xc0003caf20) Data frame received for 5\nI0125 00:34:16.183299    2292 log.go:172] (0xc0009b2280) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0125 00:34:16.183313    2292 log.go:172] (0xc0003caf20) Data frame received for 3\nI0125 00:34:16.183326    2292 log.go:172] (0xc000787540) (3) Data frame handling\nI0125 00:34:16.183335    2292 log.go:172] (0xc000787540) (3) Data frame sent\nI0125 00:34:16.183359    2292 log.go:172] (0xc0009b2280) (5) Data frame sent\nI0125 00:34:16.183366    2292 log.go:172] (0xc0003caf20) Data frame received for 5\nI0125 00:34:16.183369    2292 log.go:172] (0xc0009b2280) (5) Data frame handling\nI0125 00:34:16.183374    2292 log.go:172] (0xc0009b2280) (5) Data frame sent\nI0125 00:34:16.183379    2292 log.go:172] (0xc0003caf20) Data frame received for 5\nI0125 00:34:16.183383    2292 log.go:172] (0xc0009b2280) (5) Data frame handling\n+ true\nI0125 00:34:16.183395    2292 log.go:172] (0xc0009b2280) (5) Data frame sent\nI0125 00:34:16.279216    2292 log.go:172] (0xc0003caf20) Data frame received for 1\nI0125 00:34:16.279302    2292 log.go:172] (0xc0003caf20) (0xc0009b2280) Stream removed, broadcasting: 5\nI0125 00:34:16.279360    2292 log.go:172] (0xc0009b2140) (1) Data frame handling\nI0125 00:34:16.279388    2292 log.go:172] (0xc0009b2140) (1) Data frame sent\nI0125 00:34:16.279440    2292 log.go:172] (0xc0003caf20) (0xc000787540) Stream removed, broadcasting: 3\nI0125 00:34:16.279488    2292 log.go:172] (0xc0003caf20) (0xc0009b2140) Stream removed, broadcasting: 1\nI0125 00:34:16.279516    2292 log.go:172] (0xc0003caf20) Go away received\nI0125 00:34:16.280362    2292 log.go:172] (0xc0003caf20) (0xc0009b2140) Stream removed, broadcasting: 1\nI0125 00:34:16.280387    2292 log.go:172] (0xc0003caf20) (0xc000787540) Stream removed, broadcasting: 3\nI0125 00:34:16.280395    2292 log.go:172] (0xc0003caf20) (0xc0009b2280) Stream removed, broadcasting: 5\n"
Jan 25 00:34:16.290: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 00:34:16.290: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 00:34:16.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:34:16.727: INFO: stderr: "I0125 00:34:16.511740    2315 log.go:172] (0xc000b04000) (0xc00076e000) Create stream\nI0125 00:34:16.512126    2315 log.go:172] (0xc000b04000) (0xc00076e000) Stream added, broadcasting: 1\nI0125 00:34:16.517493    2315 log.go:172] (0xc000b04000) Reply frame received for 1\nI0125 00:34:16.517568    2315 log.go:172] (0xc000b04000) (0xc00091df40) Create stream\nI0125 00:34:16.517577    2315 log.go:172] (0xc000b04000) (0xc00091df40) Stream added, broadcasting: 3\nI0125 00:34:16.518928    2315 log.go:172] (0xc000b04000) Reply frame received for 3\nI0125 00:34:16.518963    2315 log.go:172] (0xc000b04000) (0xc0007ee000) Create stream\nI0125 00:34:16.518977    2315 log.go:172] (0xc000b04000) (0xc0007ee000) Stream added, broadcasting: 5\nI0125 00:34:16.519854    2315 log.go:172] (0xc000b04000) Reply frame received for 5\nI0125 00:34:16.607910    2315 log.go:172] (0xc000b04000) Data frame received for 3\nI0125 00:34:16.608207    2315 log.go:172] (0xc00091df40) (3) Data frame handling\nI0125 00:34:16.608266    2315 log.go:172] (0xc00091df40) (3) Data frame sent\nI0125 00:34:16.610039    2315 log.go:172] (0xc000b04000) Data frame received for 5\nI0125 00:34:16.610156    2315 log.go:172] (0xc0007ee000) (5) Data frame handling\nI0125 00:34:16.610206    2315 log.go:172] (0xc0007ee000) (5) Data frame sent\nI0125 00:34:16.610214    2315 log.go:172] (0xc000b04000) Data frame received for 5\nI0125 00:34:16.610297    2315 log.go:172] (0xc0007ee000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0125 00:34:16.610665    2315 log.go:172] (0xc0007ee000) (5) Data frame sent\nI0125 00:34:16.711511    2315 log.go:172] (0xc000b04000) Data frame received for 1\nI0125 00:34:16.711906    2315 log.go:172] (0xc00076e000) (1) Data frame handling\nI0125 00:34:16.712019    2315 log.go:172] (0xc00076e000) (1) Data frame sent\nI0125 00:34:16.712723    2315 log.go:172] (0xc000b04000) (0xc00091df40) Stream removed, broadcasting: 3\nI0125 00:34:16.713284    2315 log.go:172] (0xc000b04000) (0xc0007ee000) Stream removed, broadcasting: 5\nI0125 00:34:16.713410    2315 log.go:172] (0xc000b04000) (0xc00076e000) Stream removed, broadcasting: 1\nI0125 00:34:16.713487    2315 log.go:172] (0xc000b04000) Go away received\nI0125 00:34:16.714987    2315 log.go:172] (0xc000b04000) (0xc00076e000) Stream removed, broadcasting: 1\nI0125 00:34:16.715002    2315 log.go:172] (0xc000b04000) (0xc00091df40) Stream removed, broadcasting: 3\nI0125 00:34:16.715007    2315 log.go:172] (0xc000b04000) (0xc0007ee000) Stream removed, broadcasting: 5\n"
Jan 25 00:34:16.727: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 00:34:16.727: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 00:34:16.734: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan 25 00:34:26.742: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:34:26.742: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:34:26.742: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
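Scale-up proceeding while ss-0 was unready is the whole point of "burst" scaling: with podManagementPolicy: Parallel the controller creates and deletes pods without waiting for ordered readiness, whereas the OrderedReady default would have blocked at ss-0. A hedged fragment of the kind of spec this test exercises; the httpd image, probe, and label are assumptions inferred from the paths in the exec output:

  kubectl apply --namespace=statefulset-6736 -f - <<'EOF'
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test              # matches "Creating service test" above
    replicas: 1                    # the test later scales this to 3, then to 0
    podManagementPolicy: Parallel  # burst scaling; OrderedReady would block on unready pods
    selector:
      matchLabels:
        app: ss
    template:
      metadata:
        labels:
          app: ss
      spec:
        containers:
        - name: webserver
          image: httpd:2.4         # assumed; the suite uses an httpd-based image
          readinessProbe:
            httpGet:
              path: /index.html    # assumed probe; moving the file fails it
              port: 80
  EOF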
STEP: Confirming that stateful set scale down will not halt with unhealthy stateful pod
Jan 25 00:34:26.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 00:34:27.138: INFO: stderr: "I0125 00:34:26.967956    2335 log.go:172] (0xc000954000) (0xc000930000) Create stream\nI0125 00:34:26.968181    2335 log.go:172] (0xc000954000) (0xc000930000) Stream added, broadcasting: 1\nI0125 00:34:26.978385    2335 log.go:172] (0xc000954000) Reply frame received for 1\nI0125 00:34:26.978479    2335 log.go:172] (0xc000954000) (0xc00082e000) Create stream\nI0125 00:34:26.978504    2335 log.go:172] (0xc000954000) (0xc00082e000) Stream added, broadcasting: 3\nI0125 00:34:26.980372    2335 log.go:172] (0xc000954000) Reply frame received for 3\nI0125 00:34:26.980394    2335 log.go:172] (0xc000954000) (0xc000a7a140) Create stream\nI0125 00:34:26.980400    2335 log.go:172] (0xc000954000) (0xc000a7a140) Stream added, broadcasting: 5\nI0125 00:34:26.981628    2335 log.go:172] (0xc000954000) Reply frame received for 5\nI0125 00:34:27.039365    2335 log.go:172] (0xc000954000) Data frame received for 5\nI0125 00:34:27.039424    2335 log.go:172] (0xc000a7a140) (5) Data frame handling\nI0125 00:34:27.039442    2335 log.go:172] (0xc000a7a140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 00:34:27.039460    2335 log.go:172] (0xc000954000) Data frame received for 3\nI0125 00:34:27.039470    2335 log.go:172] (0xc00082e000) (3) Data frame handling\nI0125 00:34:27.039480    2335 log.go:172] (0xc00082e000) (3) Data frame sent\nI0125 00:34:27.123657    2335 log.go:172] (0xc000954000) Data frame received for 1\nI0125 00:34:27.123748    2335 log.go:172] (0xc000930000) (1) Data frame handling\nI0125 00:34:27.123777    2335 log.go:172] (0xc000930000) (1) Data frame sent\nI0125 00:34:27.123801    2335 log.go:172] (0xc000954000) (0xc000930000) Stream removed, broadcasting: 1\nI0125 00:34:27.125611    2335 log.go:172] (0xc000954000) (0xc00082e000) Stream removed, broadcasting: 3\nI0125 00:34:27.126351    2335 log.go:172] (0xc000954000) (0xc000a7a140) Stream removed, broadcasting: 5\nI0125 00:34:27.126639    2335 log.go:172] (0xc000954000) Go away received\nI0125 00:34:27.128222    2335 log.go:172] (0xc000954000) (0xc000930000) Stream removed, broadcasting: 1\nI0125 00:34:27.128431    2335 log.go:172] (0xc000954000) (0xc00082e000) Stream removed, broadcasting: 3\nI0125 00:34:27.128515    2335 log.go:172] (0xc000954000) (0xc000a7a140) Stream removed, broadcasting: 5\n"
Jan 25 00:34:27.139: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 00:34:27.139: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 00:34:27.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 00:34:27.748: INFO: stderr: "I0125 00:34:27.306927    2354 log.go:172] (0xc0009c9810) (0xc000ae8280) Create stream\nI0125 00:34:27.307203    2354 log.go:172] (0xc0009c9810) (0xc000ae8280) Stream added, broadcasting: 1\nI0125 00:34:27.311990    2354 log.go:172] (0xc0009c9810) Reply frame received for 1\nI0125 00:34:27.312049    2354 log.go:172] (0xc0009c9810) (0xc000aae0a0) Create stream\nI0125 00:34:27.312056    2354 log.go:172] (0xc0009c9810) (0xc000aae0a0) Stream added, broadcasting: 3\nI0125 00:34:27.313005    2354 log.go:172] (0xc0009c9810) Reply frame received for 3\nI0125 00:34:27.313081    2354 log.go:172] (0xc0009c9810) (0xc000aae140) Create stream\nI0125 00:34:27.313094    2354 log.go:172] (0xc0009c9810) (0xc000aae140) Stream added, broadcasting: 5\nI0125 00:34:27.315369    2354 log.go:172] (0xc0009c9810) Reply frame received for 5\nI0125 00:34:27.379760    2354 log.go:172] (0xc0009c9810) Data frame received for 5\nI0125 00:34:27.379950    2354 log.go:172] (0xc000aae140) (5) Data frame handling\nI0125 00:34:27.380002    2354 log.go:172] (0xc000aae140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 00:34:27.599077    2354 log.go:172] (0xc0009c9810) Data frame received for 3\nI0125 00:34:27.599158    2354 log.go:172] (0xc000aae0a0) (3) Data frame handling\nI0125 00:34:27.599187    2354 log.go:172] (0xc000aae0a0) (3) Data frame sent\nI0125 00:34:27.726798    2354 log.go:172] (0xc0009c9810) Data frame received for 1\nI0125 00:34:27.727048    2354 log.go:172] (0xc0009c9810) (0xc000aae140) Stream removed, broadcasting: 5\nI0125 00:34:27.727128    2354 log.go:172] (0xc0009c9810) (0xc000aae0a0) Stream removed, broadcasting: 3\nI0125 00:34:27.727282    2354 log.go:172] (0xc000ae8280) (1) Data frame handling\nI0125 00:34:27.727463    2354 log.go:172] (0xc000ae8280) (1) Data frame sent\nI0125 00:34:27.727675    2354 log.go:172] (0xc0009c9810) (0xc000ae8280) Stream removed, broadcasting: 1\nI0125 00:34:27.727770    2354 log.go:172] (0xc0009c9810) Go away received\nI0125 00:34:27.730101    2354 log.go:172] (0xc0009c9810) (0xc000ae8280) Stream removed, broadcasting: 1\nI0125 00:34:27.730640    2354 log.go:172] (0xc0009c9810) (0xc000aae0a0) Stream removed, broadcasting: 3\nI0125 00:34:27.730703    2354 log.go:172] (0xc0009c9810) (0xc000aae140) Stream removed, broadcasting: 5\n"
Jan 25 00:34:27.748: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 00:34:27.749: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 00:34:27.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 00:34:28.132: INFO: stderr: "I0125 00:34:27.926649    2373 log.go:172] (0xc00096a6e0) (0xc0009ea1e0) Create stream\nI0125 00:34:27.927302    2373 log.go:172] (0xc00096a6e0) (0xc0009ea1e0) Stream added, broadcasting: 1\nI0125 00:34:27.935880    2373 log.go:172] (0xc00096a6e0) Reply frame received for 1\nI0125 00:34:27.935950    2373 log.go:172] (0xc00096a6e0) (0xc000510820) Create stream\nI0125 00:34:27.935961    2373 log.go:172] (0xc00096a6e0) (0xc000510820) Stream added, broadcasting: 3\nI0125 00:34:27.937384    2373 log.go:172] (0xc00096a6e0) Reply frame received for 3\nI0125 00:34:27.937417    2373 log.go:172] (0xc00096a6e0) (0xc0007454a0) Create stream\nI0125 00:34:27.937428    2373 log.go:172] (0xc00096a6e0) (0xc0007454a0) Stream added, broadcasting: 5\nI0125 00:34:27.939928    2373 log.go:172] (0xc00096a6e0) Reply frame received for 5\nI0125 00:34:28.005242    2373 log.go:172] (0xc00096a6e0) Data frame received for 5\nI0125 00:34:28.005561    2373 log.go:172] (0xc0007454a0) (5) Data frame handling\nI0125 00:34:28.005662    2373 log.go:172] (0xc0007454a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 00:34:28.056593    2373 log.go:172] (0xc00096a6e0) Data frame received for 3\nI0125 00:34:28.056634    2373 log.go:172] (0xc000510820) (3) Data frame handling\nI0125 00:34:28.056646    2373 log.go:172] (0xc000510820) (3) Data frame sent\nI0125 00:34:28.119830    2373 log.go:172] (0xc00096a6e0) (0xc000510820) Stream removed, broadcasting: 3\nI0125 00:34:28.120123    2373 log.go:172] (0xc00096a6e0) Data frame received for 1\nI0125 00:34:28.120198    2373 log.go:172] (0xc0009ea1e0) (1) Data frame handling\nI0125 00:34:28.120256    2373 log.go:172] (0xc00096a6e0) (0xc0007454a0) Stream removed, broadcasting: 5\nI0125 00:34:28.120329    2373 log.go:172] (0xc0009ea1e0) (1) Data frame sent\nI0125 00:34:28.120376    2373 log.go:172] (0xc00096a6e0) (0xc0009ea1e0) Stream removed, broadcasting: 1\nI0125 00:34:28.120415    2373 log.go:172] (0xc00096a6e0) Go away received\nI0125 00:34:28.121550    2373 log.go:172] (0xc00096a6e0) (0xc0009ea1e0) Stream removed, broadcasting: 1\nI0125 00:34:28.121614    2373 log.go:172] (0xc00096a6e0) (0xc000510820) Stream removed, broadcasting: 3\nI0125 00:34:28.121646    2373 log.go:172] (0xc00096a6e0) (0xc0007454a0) Stream removed, broadcasting: 5\n"
Jan 25 00:34:28.132: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 00:34:28.132: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 00:34:28.132: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 00:34:28.137: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 25 00:34:38.213: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 00:34:38.213: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 00:34:38.213: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 00:34:38.266: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 00:34:38.266: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  }]
Jan 25 00:34:38.266: INFO: ss-1  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:38.266: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:38.266: INFO: 
Jan 25 00:34:38.266: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 00:34:41.089: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 00:34:41.089: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  }]
Jan 25 00:34:41.090: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:41.090: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:41.090: INFO: 
Jan 25 00:34:41.090: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 00:34:42.104: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 00:34:42.104: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  }]
Jan 25 00:34:42.104: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:42.104: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:42.104: INFO: 
Jan 25 00:34:42.104: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 00:34:43.113: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 00:34:43.113: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  }]
Jan 25 00:34:43.113: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:43.113: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:43.113: INFO: 
Jan 25 00:34:43.113: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 00:34:44.120: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 00:34:44.120: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  }]
Jan 25 00:34:44.120: INFO: ss-1  jerma-server-mvvl6gufaqub  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:44.120: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:44.120: INFO: 
Jan 25 00:34:44.120: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 00:34:45.158: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 00:34:45.158: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  }]
Jan 25 00:34:45.158: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:45.158: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:45.158: INFO: 
Jan 25 00:34:45.158: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 00:34:46.166: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 00:34:46.166: INFO: ss-0  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:33:45 +0000 UTC  }]
Jan 25 00:34:46.167: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:46.167: INFO: ss-2  jerma-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:46.167: INFO: 
Jan 25 00:34:46.167: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 25 00:34:47.173: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 00:34:47.173: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:47.173: INFO: 
Jan 25 00:34:47.173: INFO: StatefulSet ss has not reached scale 0, at 1
Jan 25 00:34:48.180: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 25 00:34:48.180: INFO: ss-1  jerma-server-mvvl6gufaqub  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-25 00:34:05 +0000 UTC  }]
Jan 25 00:34:48.180: INFO: 
Jan 25 00:34:48.180: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-6736
Jan 25 00:34:49.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:34:49.399: INFO: rc: 1
Jan 25 00:34:49.399: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
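The first failure above ("container not found") is the exec racing the container's termination during scale-down; every later attempt fails with NotFound once ss-1's pod object is gone. The framework simply retries the host command every 10s until its overall wait expires, roughly:

  # hedged shell analogue of the framework's RunHostCmd retry (the real loop is
  # Go code in the e2e framework); ~30 attempts at 10s intervals matches this run
  for attempt in $(seq 1 30); do
    kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- \
      /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' && break
    sleep 10
  done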
Jan 25 00:34:59.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:34:59.559: INFO: rc: 1
Jan 25 00:34:59.560: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:35:09.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:35:09.758: INFO: rc: 1
Jan 25 00:35:09.758: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:35:19.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:35:19.964: INFO: rc: 1
Jan 25 00:35:19.965: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:35:29.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:35:30.156: INFO: rc: 1
Jan 25 00:35:30.156: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:35:40.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:35:40.349: INFO: rc: 1
Jan 25 00:35:40.349: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:35:50.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:35:50.576: INFO: rc: 1
Jan 25 00:35:50.577: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:36:00.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:36:00.747: INFO: rc: 1
Jan 25 00:36:00.747: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:36:10.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:36:10.932: INFO: rc: 1
Jan 25 00:36:10.932: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:36:20.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:36:21.081: INFO: rc: 1
Jan 25 00:36:21.081: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:36:31.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:36:31.275: INFO: rc: 1
Jan 25 00:36:31.275: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:36:41.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:36:41.438: INFO: rc: 1
Jan 25 00:36:41.438: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:36:51.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:36:51.599: INFO: rc: 1
Jan 25 00:36:51.599: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:37:01.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:37:01.784: INFO: rc: 1
Jan 25 00:37:01.784: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:37:11.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:37:11.950: INFO: rc: 1
Jan 25 00:37:11.950: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:37:21.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:37:22.171: INFO: rc: 1
Jan 25 00:37:22.171: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:37:32.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:37:32.356: INFO: rc: 1
Jan 25 00:37:32.356: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:37:42.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:37:42.476: INFO: rc: 1
Jan 25 00:37:42.476: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:37:52.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:37:52.599: INFO: rc: 1
Jan 25 00:37:52.599: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:38:02.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:38:02.716: INFO: rc: 1
Jan 25 00:38:02.716: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:38:12.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:38:12.882: INFO: rc: 1
Jan 25 00:38:12.882: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:38:22.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:38:22.994: INFO: rc: 1
Jan 25 00:38:22.994: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:38:32.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:38:33.194: INFO: rc: 1
Jan 25 00:38:33.194: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:38:43.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:38:43.398: INFO: rc: 1
Jan 25 00:38:43.398: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:38:53.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:38:53.575: INFO: rc: 1
Jan 25 00:38:53.575: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:39:03.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:39:03.846: INFO: rc: 1
Jan 25 00:39:03.846: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:39:13.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:39:13.998: INFO: rc: 1
Jan 25 00:39:13.998: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:39:23.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:39:24.139: INFO: rc: 1
Jan 25 00:39:24.139: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:39:34.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:39:34.281: INFO: rc: 1
Jan 25 00:39:34.281: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:39:44.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:39:44.489: INFO: rc: 1
Jan 25 00:39:44.490: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Jan 25 00:39:54.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6736 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:39:54.666: INFO: rc: 1
Jan 25 00:39:54.666: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: 
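The loop above is the framework's RunHostCmd retry: poll at a fixed 10s interval, treat exec failures as transient, and give up once the timeout expires. A minimal Go sketch of that pattern follows; the kubectl invocation mirrors the logged command, but the helper name and wiring here are illustrative, not the framework's actual code.

package e2esketch

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// runHostCmdWithRetries shells out to kubectl exec, retrying transient
// failures at a fixed interval until success or timeout, mirroring the
// 10s retry cadence in the log above.
func runHostCmdWithRetries(ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
	var out string
	err := wait.PollImmediate(interval, timeout, func() (bool, error) {
		b, cmdErr := exec.Command("kubectl", "exec",
			"--namespace="+ns, pod, "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
		if cmdErr != nil {
			fmt.Printf("Waiting %v to retry failed RunHostCmd: %v\n", interval, cmdErr)
			return false, nil // treat as transient and retry
		}
		out = string(b)
		return true, nil
	})
	return out, err
}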
Jan 25 00:39:54.666: INFO: Scaling statefulset ss to 0
Jan 25 00:39:54.714: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 25 00:39:54.717: INFO: Deleting all statefulset in ns statefulset-6736
Jan 25 00:39:54.719: INFO: Scaling statefulset ss to 0
Jan 25 00:39:54.727: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 00:39:54.729: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:39:54.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6736" for this suite.

• [SLOW TEST:370.016 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":154,"skipped":2509,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:39:54.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-d33e7e27-9fe6-420a-ab60-0de72cee292f
STEP: Creating a pod to test consume secrets
Jan 25 00:39:54.894: INFO: Waiting up to 5m0s for pod "pod-secrets-dd62af98-e638-4261-9c76-2b48dc8fe214" in namespace "secrets-3652" to be "success or failure"
Jan 25 00:39:54.900: INFO: Pod "pod-secrets-dd62af98-e638-4261-9c76-2b48dc8fe214": Phase="Pending", Reason="", readiness=false. Elapsed: 5.387968ms
Jan 25 00:39:56.905: INFO: Pod "pod-secrets-dd62af98-e638-4261-9c76-2b48dc8fe214": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010935275s
Jan 25 00:39:58.915: INFO: Pod "pod-secrets-dd62af98-e638-4261-9c76-2b48dc8fe214": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020096764s
Jan 25 00:40:00.924: INFO: Pod "pod-secrets-dd62af98-e638-4261-9c76-2b48dc8fe214": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029660048s
Jan 25 00:40:02.931: INFO: Pod "pod-secrets-dd62af98-e638-4261-9c76-2b48dc8fe214": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036925222s
STEP: Saw pod success
Jan 25 00:40:02.931: INFO: Pod "pod-secrets-dd62af98-e638-4261-9c76-2b48dc8fe214" satisfied condition "success or failure"
Jan 25 00:40:02.937: INFO: Trying to get logs from node jerma-node pod pod-secrets-dd62af98-e638-4261-9c76-2b48dc8fe214 container secret-volume-test: 
STEP: delete the pod
Jan 25 00:40:03.005: INFO: Waiting for pod pod-secrets-dd62af98-e638-4261-9c76-2b48dc8fe214 to disappear
Jan 25 00:40:03.075: INFO: Pod pod-secrets-dd62af98-e638-4261-9c76-2b48dc8fe214 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:40:03.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3652" for this suite.

• [SLOW TEST:8.314 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2526,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:40:03.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-projected-all-test-volume-6945eda1-e2e3-48ac-ba7e-fdea6299e5ba
STEP: Creating secret with name secret-projected-all-test-volume-269d4581-36ec-41b0-9759-311849a1b38d
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 25 00:40:03.299: INFO: Waiting up to 5m0s for pod "projected-volume-114071f6-7284-4fc1-a281-aa535850f395" in namespace "projected-9754" to be "success or failure"
Jan 25 00:40:03.357: INFO: Pod "projected-volume-114071f6-7284-4fc1-a281-aa535850f395": Phase="Pending", Reason="", readiness=false. Elapsed: 57.167809ms
Jan 25 00:40:05.368: INFO: Pod "projected-volume-114071f6-7284-4fc1-a281-aa535850f395": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069054055s
Jan 25 00:40:07.374: INFO: Pod "projected-volume-114071f6-7284-4fc1-a281-aa535850f395": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074316557s
Jan 25 00:40:09.382: INFO: Pod "projected-volume-114071f6-7284-4fc1-a281-aa535850f395": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0822956s
Jan 25 00:40:11.390: INFO: Pod "projected-volume-114071f6-7284-4fc1-a281-aa535850f395": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090342008s
STEP: Saw pod success
Jan 25 00:40:11.390: INFO: Pod "projected-volume-114071f6-7284-4fc1-a281-aa535850f395" satisfied condition "success or failure"
Jan 25 00:40:11.393: INFO: Trying to get logs from node jerma-node pod projected-volume-114071f6-7284-4fc1-a281-aa535850f395 container projected-all-volume-test: 
STEP: delete the pod
Jan 25 00:40:11.634: INFO: Waiting for pod projected-volume-114071f6-7284-4fc1-a281-aa535850f395 to disappear
Jan 25 00:40:11.642: INFO: Pod projected-volume-114071f6-7284-4fc1-a281-aa535850f395 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:40:11.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9754" for this suite.

• [SLOW TEST:8.583 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2533,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:40:11.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-3346
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3346
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3346
Jan 25 00:40:11.863: INFO: Found 0 stateful pods, waiting for 1
Jan 25 00:40:21.876: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale-up will halt with an unhealthy stateful pod
Jan 25 00:40:21.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3346 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 00:40:22.439: INFO: stderr: "I0125 00:40:22.165171    2998 log.go:172] (0xc000ae5e40) (0xc000a26000) Create stream\nI0125 00:40:22.165760    2998 log.go:172] (0xc000ae5e40) (0xc000a26000) Stream added, broadcasting: 1\nI0125 00:40:22.173926    2998 log.go:172] (0xc000ae5e40) Reply frame received for 1\nI0125 00:40:22.174181    2998 log.go:172] (0xc000ae5e40) (0xc000adc000) Create stream\nI0125 00:40:22.174216    2998 log.go:172] (0xc000ae5e40) (0xc000adc000) Stream added, broadcasting: 3\nI0125 00:40:22.177208    2998 log.go:172] (0xc000ae5e40) Reply frame received for 3\nI0125 00:40:22.177283    2998 log.go:172] (0xc000ae5e40) (0xc00092a000) Create stream\nI0125 00:40:22.177342    2998 log.go:172] (0xc000ae5e40) (0xc00092a000) Stream added, broadcasting: 5\nI0125 00:40:22.178827    2998 log.go:172] (0xc000ae5e40) Reply frame received for 5\nI0125 00:40:22.289572    2998 log.go:172] (0xc000ae5e40) Data frame received for 5\nI0125 00:40:22.289695    2998 log.go:172] (0xc00092a000) (5) Data frame handling\nI0125 00:40:22.289725    2998 log.go:172] (0xc00092a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 00:40:22.315747    2998 log.go:172] (0xc000ae5e40) Data frame received for 3\nI0125 00:40:22.315878    2998 log.go:172] (0xc000adc000) (3) Data frame handling\nI0125 00:40:22.315928    2998 log.go:172] (0xc000adc000) (3) Data frame sent\nI0125 00:40:22.415990    2998 log.go:172] (0xc000ae5e40) (0xc000adc000) Stream removed, broadcasting: 3\nI0125 00:40:22.416369    2998 log.go:172] (0xc000ae5e40) Data frame received for 1\nI0125 00:40:22.416440    2998 log.go:172] (0xc000a26000) (1) Data frame handling\nI0125 00:40:22.416534    2998 log.go:172] (0xc000a26000) (1) Data frame sent\nI0125 00:40:22.416722    2998 log.go:172] (0xc000ae5e40) (0xc000a26000) Stream removed, broadcasting: 1\nI0125 00:40:22.418484    2998 log.go:172] (0xc000ae5e40) (0xc00092a000) Stream removed, broadcasting: 5\nI0125 00:40:22.418608    2998 log.go:172] (0xc000ae5e40) (0xc000a26000) Stream removed, broadcasting: 1\nI0125 00:40:22.418638    2998 log.go:172] (0xc000ae5e40) (0xc000adc000) Stream removed, broadcasting: 3\nI0125 00:40:22.418662    2998 log.go:172] (0xc000ae5e40) (0xc00092a000) Stream removed, broadcasting: 5\nI0125 00:40:22.418953    2998 log.go:172] (0xc000ae5e40) Go away received\n"
Jan 25 00:40:22.439: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 00:40:22.439: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 00:40:22.445: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 25 00:40:32.454: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 00:40:32.454: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 00:40:32.482: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999514s
Jan 25 00:40:33.488: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991278258s
Jan 25 00:40:34.496: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.985346094s
Jan 25 00:40:35.517: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.977489142s
Jan 25 00:40:36.527: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955985309s
Jan 25 00:40:37.571: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.945888224s
Jan 25 00:40:38.594: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.90177722s
Jan 25 00:40:39.608: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.878922985s
Jan 25 00:40:40.616: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.865452753s
Jan 25 00:40:41.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 856.93729ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-3346
Jan 25 00:40:42.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3346 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:40:43.054: INFO: stderr: "I0125 00:40:42.837279    3018 log.go:172] (0xc000104370) (0xc000681c20) Create stream\nI0125 00:40:42.837934    3018 log.go:172] (0xc000104370) (0xc000681c20) Stream added, broadcasting: 1\nI0125 00:40:42.843584    3018 log.go:172] (0xc000104370) Reply frame received for 1\nI0125 00:40:42.843717    3018 log.go:172] (0xc000104370) (0xc0005c8820) Create stream\nI0125 00:40:42.843742    3018 log.go:172] (0xc000104370) (0xc0005c8820) Stream added, broadcasting: 3\nI0125 00:40:42.845610    3018 log.go:172] (0xc000104370) Reply frame received for 3\nI0125 00:40:42.845637    3018 log.go:172] (0xc000104370) (0xc000681cc0) Create stream\nI0125 00:40:42.845645    3018 log.go:172] (0xc000104370) (0xc000681cc0) Stream added, broadcasting: 5\nI0125 00:40:42.847542    3018 log.go:172] (0xc000104370) Reply frame received for 5\nI0125 00:40:42.954343    3018 log.go:172] (0xc000104370) Data frame received for 5\nI0125 00:40:42.954673    3018 log.go:172] (0xc000681cc0) (5) Data frame handling\nI0125 00:40:42.954749    3018 log.go:172] (0xc000681cc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 00:40:42.955824    3018 log.go:172] (0xc000104370) Data frame received for 3\nI0125 00:40:42.955846    3018 log.go:172] (0xc0005c8820) (3) Data frame handling\nI0125 00:40:42.955865    3018 log.go:172] (0xc0005c8820) (3) Data frame sent\nI0125 00:40:43.043310    3018 log.go:172] (0xc000104370) Data frame received for 1\nI0125 00:40:43.043546    3018 log.go:172] (0xc000104370) (0xc0005c8820) Stream removed, broadcasting: 3\nI0125 00:40:43.043612    3018 log.go:172] (0xc000681c20) (1) Data frame handling\nI0125 00:40:43.043635    3018 log.go:172] (0xc000681c20) (1) Data frame sent\nI0125 00:40:43.043668    3018 log.go:172] (0xc000104370) (0xc000681cc0) Stream removed, broadcasting: 5\nI0125 00:40:43.043704    3018 log.go:172] (0xc000104370) (0xc000681c20) Stream removed, broadcasting: 1\nI0125 00:40:43.043735    3018 log.go:172] (0xc000104370) Go away received\nI0125 00:40:43.046293    3018 log.go:172] (0xc000104370) (0xc000681c20) Stream removed, broadcasting: 1\nI0125 00:40:43.046465    3018 log.go:172] (0xc000104370) (0xc0005c8820) Stream removed, broadcasting: 3\nI0125 00:40:43.046480    3018 log.go:172] (0xc000104370) (0xc000681cc0) Stream removed, broadcasting: 5\n"
Jan 25 00:40:43.055: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 00:40:43.055: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 00:40:43.092: INFO: Found 2 stateful pods, waiting for 3
Jan 25 00:40:53.098: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:40:53.099: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:40:53.099: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 00:41:03.099: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:41:03.099: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:41:03.099: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with an unhealthy stateful pod
Jan 25 00:41:03.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3346 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 00:41:03.525: INFO: stderr: "I0125 00:41:03.348842    3037 log.go:172] (0xc000b10d10) (0xc000afe320) Create stream\nI0125 00:41:03.349138    3037 log.go:172] (0xc000b10d10) (0xc000afe320) Stream added, broadcasting: 1\nI0125 00:41:03.352825    3037 log.go:172] (0xc000b10d10) Reply frame received for 1\nI0125 00:41:03.352878    3037 log.go:172] (0xc000b10d10) (0xc000aba140) Create stream\nI0125 00:41:03.352886    3037 log.go:172] (0xc000b10d10) (0xc000aba140) Stream added, broadcasting: 3\nI0125 00:41:03.354150    3037 log.go:172] (0xc000b10d10) Reply frame received for 3\nI0125 00:41:03.354182    3037 log.go:172] (0xc000b10d10) (0xc000aba1e0) Create stream\nI0125 00:41:03.354193    3037 log.go:172] (0xc000b10d10) (0xc000aba1e0) Stream added, broadcasting: 5\nI0125 00:41:03.356168    3037 log.go:172] (0xc000b10d10) Reply frame received for 5\nI0125 00:41:03.438177    3037 log.go:172] (0xc000b10d10) Data frame received for 3\nI0125 00:41:03.438526    3037 log.go:172] (0xc000aba140) (3) Data frame handling\nI0125 00:41:03.438591    3037 log.go:172] (0xc000aba140) (3) Data frame sent\nI0125 00:41:03.438664    3037 log.go:172] (0xc000b10d10) Data frame received for 5\nI0125 00:41:03.438680    3037 log.go:172] (0xc000aba1e0) (5) Data frame handling\nI0125 00:41:03.438702    3037 log.go:172] (0xc000aba1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 00:41:03.515521    3037 log.go:172] (0xc000b10d10) Data frame received for 1\nI0125 00:41:03.515596    3037 log.go:172] (0xc000b10d10) (0xc000aba140) Stream removed, broadcasting: 3\nI0125 00:41:03.515694    3037 log.go:172] (0xc000afe320) (1) Data frame handling\nI0125 00:41:03.515715    3037 log.go:172] (0xc000b10d10) (0xc000aba1e0) Stream removed, broadcasting: 5\nI0125 00:41:03.515757    3037 log.go:172] (0xc000afe320) (1) Data frame sent\nI0125 00:41:03.515779    3037 log.go:172] (0xc000b10d10) (0xc000afe320) Stream removed, broadcasting: 1\nI0125 00:41:03.515808    3037 log.go:172] (0xc000b10d10) Go away received\nI0125 00:41:03.516521    3037 log.go:172] (0xc000b10d10) (0xc000afe320) Stream removed, broadcasting: 1\nI0125 00:41:03.516535    3037 log.go:172] (0xc000b10d10) (0xc000aba140) Stream removed, broadcasting: 3\nI0125 00:41:03.516539    3037 log.go:172] (0xc000b10d10) (0xc000aba1e0) Stream removed, broadcasting: 5\n"
Jan 25 00:41:03.525: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 00:41:03.525: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 00:41:03.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3346 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 00:41:04.093: INFO: stderr: "I0125 00:41:03.709688    3057 log.go:172] (0xc0009d7550) (0xc000b7a1e0) Create stream\nI0125 00:41:03.709963    3057 log.go:172] (0xc0009d7550) (0xc000b7a1e0) Stream added, broadcasting: 1\nI0125 00:41:03.713035    3057 log.go:172] (0xc0009d7550) Reply frame received for 1\nI0125 00:41:03.713086    3057 log.go:172] (0xc0009d7550) (0xc000ba60a0) Create stream\nI0125 00:41:03.713102    3057 log.go:172] (0xc0009d7550) (0xc000ba60a0) Stream added, broadcasting: 3\nI0125 00:41:03.713924    3057 log.go:172] (0xc0009d7550) Reply frame received for 3\nI0125 00:41:03.713949    3057 log.go:172] (0xc0009d7550) (0xc0009ca320) Create stream\nI0125 00:41:03.713959    3057 log.go:172] (0xc0009d7550) (0xc0009ca320) Stream added, broadcasting: 5\nI0125 00:41:03.714725    3057 log.go:172] (0xc0009d7550) Reply frame received for 5\nI0125 00:41:03.816024    3057 log.go:172] (0xc0009d7550) Data frame received for 5\nI0125 00:41:03.816238    3057 log.go:172] (0xc0009ca320) (5) Data frame handling\nI0125 00:41:03.816308    3057 log.go:172] (0xc0009ca320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 00:41:03.909169    3057 log.go:172] (0xc0009d7550) Data frame received for 3\nI0125 00:41:03.909461    3057 log.go:172] (0xc000ba60a0) (3) Data frame handling\nI0125 00:41:03.909548    3057 log.go:172] (0xc000ba60a0) (3) Data frame sent\nI0125 00:41:04.078420    3057 log.go:172] (0xc0009d7550) (0xc000ba60a0) Stream removed, broadcasting: 3\nI0125 00:41:04.078639    3057 log.go:172] (0xc0009d7550) Data frame received for 1\nI0125 00:41:04.078662    3057 log.go:172] (0xc000b7a1e0) (1) Data frame handling\nI0125 00:41:04.078692    3057 log.go:172] (0xc000b7a1e0) (1) Data frame sent\nI0125 00:41:04.078781    3057 log.go:172] (0xc0009d7550) (0xc000b7a1e0) Stream removed, broadcasting: 1\nI0125 00:41:04.078879    3057 log.go:172] (0xc0009d7550) (0xc0009ca320) Stream removed, broadcasting: 5\nI0125 00:41:04.078914    3057 log.go:172] (0xc0009d7550) Go away received\nI0125 00:41:04.080320    3057 log.go:172] (0xc0009d7550) (0xc000b7a1e0) Stream removed, broadcasting: 1\nI0125 00:41:04.080342    3057 log.go:172] (0xc0009d7550) (0xc000ba60a0) Stream removed, broadcasting: 3\nI0125 00:41:04.080349    3057 log.go:172] (0xc0009d7550) (0xc0009ca320) Stream removed, broadcasting: 5\n"
Jan 25 00:41:04.093: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 00:41:04.093: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 00:41:04.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3346 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 25 00:41:04.561: INFO: stderr: "I0125 00:41:04.330861    3076 log.go:172] (0xc00097ad10) (0xc000afc1e0) Create stream\nI0125 00:41:04.331290    3076 log.go:172] (0xc00097ad10) (0xc000afc1e0) Stream added, broadcasting: 1\nI0125 00:41:04.335350    3076 log.go:172] (0xc00097ad10) Reply frame received for 1\nI0125 00:41:04.335434    3076 log.go:172] (0xc00097ad10) (0xc000afc280) Create stream\nI0125 00:41:04.335485    3076 log.go:172] (0xc00097ad10) (0xc000afc280) Stream added, broadcasting: 3\nI0125 00:41:04.337416    3076 log.go:172] (0xc00097ad10) Reply frame received for 3\nI0125 00:41:04.337447    3076 log.go:172] (0xc00097ad10) (0xc00084c000) Create stream\nI0125 00:41:04.337458    3076 log.go:172] (0xc00097ad10) (0xc00084c000) Stream added, broadcasting: 5\nI0125 00:41:04.338888    3076 log.go:172] (0xc00097ad10) Reply frame received for 5\nI0125 00:41:04.422485    3076 log.go:172] (0xc00097ad10) Data frame received for 5\nI0125 00:41:04.422591    3076 log.go:172] (0xc00084c000) (5) Data frame handling\nI0125 00:41:04.422625    3076 log.go:172] (0xc00084c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0125 00:41:04.453899    3076 log.go:172] (0xc00097ad10) Data frame received for 3\nI0125 00:41:04.454007    3076 log.go:172] (0xc000afc280) (3) Data frame handling\nI0125 00:41:04.454049    3076 log.go:172] (0xc000afc280) (3) Data frame sent\nI0125 00:41:04.546607    3076 log.go:172] (0xc00097ad10) (0xc000afc280) Stream removed, broadcasting: 3\nI0125 00:41:04.546754    3076 log.go:172] (0xc00097ad10) Data frame received for 1\nI0125 00:41:04.546789    3076 log.go:172] (0xc000afc1e0) (1) Data frame handling\nI0125 00:41:04.546824    3076 log.go:172] (0xc000afc1e0) (1) Data frame sent\nI0125 00:41:04.546839    3076 log.go:172] (0xc00097ad10) (0xc000afc1e0) Stream removed, broadcasting: 1\nI0125 00:41:04.546880    3076 log.go:172] (0xc00097ad10) (0xc00084c000) Stream removed, broadcasting: 5\nI0125 00:41:04.546915    3076 log.go:172] (0xc00097ad10) Go away received\nI0125 00:41:04.548483    3076 log.go:172] (0xc00097ad10) (0xc000afc1e0) Stream removed, broadcasting: 1\nI0125 00:41:04.548546    3076 log.go:172] (0xc00097ad10) (0xc000afc280) Stream removed, broadcasting: 3\nI0125 00:41:04.548559    3076 log.go:172] (0xc00097ad10) (0xc00084c000) Stream removed, broadcasting: 5\n"
Jan 25 00:41:04.562: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 25 00:41:04.562: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 25 00:41:04.562: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 00:41:04.575: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 25 00:41:14.589: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 00:41:14.589: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 00:41:14.589: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 25 00:41:14.617: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999656s
Jan 25 00:41:15.625: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987544383s
Jan 25 00:41:16.636: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979513817s
Jan 25 00:41:17.641: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.96876951s
Jan 25 00:41:18.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.962843189s
Jan 25 00:41:19.895: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.715804042s
Jan 25 00:41:20.904: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.70892952s
Jan 25 00:41:21.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.700227493s
Jan 25 00:41:22.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.690799957s
Jan 25 00:41:23.945: INFO: Verifying statefulset ss doesn't scale past 3 for another 667.827077ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3346
Jan 25 00:41:24.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3346 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:41:25.337: INFO: stderr: "I0125 00:41:25.167510    3093 log.go:172] (0xc000a04840) (0xc000902280) Create stream\nI0125 00:41:25.167784    3093 log.go:172] (0xc000a04840) (0xc000902280) Stream added, broadcasting: 1\nI0125 00:41:25.171631    3093 log.go:172] (0xc000a04840) Reply frame received for 1\nI0125 00:41:25.171692    3093 log.go:172] (0xc000a04840) (0xc0005e1b80) Create stream\nI0125 00:41:25.171710    3093 log.go:172] (0xc000a04840) (0xc0005e1b80) Stream added, broadcasting: 3\nI0125 00:41:25.173395    3093 log.go:172] (0xc000a04840) Reply frame received for 3\nI0125 00:41:25.173497    3093 log.go:172] (0xc000a04840) (0xc000902320) Create stream\nI0125 00:41:25.173510    3093 log.go:172] (0xc000a04840) (0xc000902320) Stream added, broadcasting: 5\nI0125 00:41:25.174629    3093 log.go:172] (0xc000a04840) Reply frame received for 5\nI0125 00:41:25.242639    3093 log.go:172] (0xc000a04840) Data frame received for 3\nI0125 00:41:25.242717    3093 log.go:172] (0xc0005e1b80) (3) Data frame handling\nI0125 00:41:25.242746    3093 log.go:172] (0xc0005e1b80) (3) Data frame sent\nI0125 00:41:25.242810    3093 log.go:172] (0xc000a04840) Data frame received for 5\nI0125 00:41:25.242831    3093 log.go:172] (0xc000902320) (5) Data frame handling\nI0125 00:41:25.242869    3093 log.go:172] (0xc000902320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 00:41:25.328816    3093 log.go:172] (0xc000a04840) Data frame received for 1\nI0125 00:41:25.329007    3093 log.go:172] (0xc000a04840) (0xc0005e1b80) Stream removed, broadcasting: 3\nI0125 00:41:25.329188    3093 log.go:172] (0xc000902280) (1) Data frame handling\nI0125 00:41:25.329342    3093 log.go:172] (0xc000902280) (1) Data frame sent\nI0125 00:41:25.329383    3093 log.go:172] (0xc000a04840) (0xc000902320) Stream removed, broadcasting: 5\nI0125 00:41:25.329416    3093 log.go:172] (0xc000a04840) (0xc000902280) Stream removed, broadcasting: 1\nI0125 00:41:25.329737    3093 log.go:172] (0xc000a04840) Go away received\nI0125 00:41:25.330077    3093 log.go:172] (0xc000a04840) (0xc000902280) Stream removed, broadcasting: 1\nI0125 00:41:25.330126    3093 log.go:172] (0xc000a04840) (0xc0005e1b80) Stream removed, broadcasting: 3\nI0125 00:41:25.330168    3093 log.go:172] (0xc000a04840) (0xc000902320) Stream removed, broadcasting: 5\n"
Jan 25 00:41:25.337: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 00:41:25.337: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 00:41:25.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3346 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:41:25.681: INFO: stderr: "I0125 00:41:25.498725    3114 log.go:172] (0xc000102580) (0xc000a881e0) Create stream\nI0125 00:41:25.499074    3114 log.go:172] (0xc000102580) (0xc000a881e0) Stream added, broadcasting: 1\nI0125 00:41:25.501513    3114 log.go:172] (0xc000102580) Reply frame received for 1\nI0125 00:41:25.501598    3114 log.go:172] (0xc000102580) (0xc000a60000) Create stream\nI0125 00:41:25.501618    3114 log.go:172] (0xc000102580) (0xc000a60000) Stream added, broadcasting: 3\nI0125 00:41:25.502494    3114 log.go:172] (0xc000102580) Reply frame received for 3\nI0125 00:41:25.502511    3114 log.go:172] (0xc000102580) (0xc000a88280) Create stream\nI0125 00:41:25.502516    3114 log.go:172] (0xc000102580) (0xc000a88280) Stream added, broadcasting: 5\nI0125 00:41:25.504325    3114 log.go:172] (0xc000102580) Reply frame received for 5\nI0125 00:41:25.569531    3114 log.go:172] (0xc000102580) Data frame received for 5\nI0125 00:41:25.569653    3114 log.go:172] (0xc000a88280) (5) Data frame handling\nI0125 00:41:25.569668    3114 log.go:172] (0xc000a88280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 00:41:25.569852    3114 log.go:172] (0xc000102580) Data frame received for 3\nI0125 00:41:25.569918    3114 log.go:172] (0xc000a60000) (3) Data frame handling\nI0125 00:41:25.569948    3114 log.go:172] (0xc000a60000) (3) Data frame sent\nI0125 00:41:25.668046    3114 log.go:172] (0xc000102580) Data frame received for 1\nI0125 00:41:25.668160    3114 log.go:172] (0xc000a881e0) (1) Data frame handling\nI0125 00:41:25.668212    3114 log.go:172] (0xc000a881e0) (1) Data frame sent\nI0125 00:41:25.668411    3114 log.go:172] (0xc000102580) (0xc000a881e0) Stream removed, broadcasting: 1\nI0125 00:41:25.668597    3114 log.go:172] (0xc000102580) (0xc000a60000) Stream removed, broadcasting: 3\nI0125 00:41:25.669426    3114 log.go:172] (0xc000102580) (0xc000a88280) Stream removed, broadcasting: 5\nI0125 00:41:25.669524    3114 log.go:172] (0xc000102580) Go away received\nI0125 00:41:25.670497    3114 log.go:172] (0xc000102580) (0xc000a881e0) Stream removed, broadcasting: 1\nI0125 00:41:25.670521    3114 log.go:172] (0xc000102580) (0xc000a60000) Stream removed, broadcasting: 3\nI0125 00:41:25.670537    3114 log.go:172] (0xc000102580) (0xc000a88280) Stream removed, broadcasting: 5\n"
Jan 25 00:41:25.681: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 00:41:25.681: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 00:41:25.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3346 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 25 00:41:26.122: INFO: stderr: "I0125 00:41:25.942265    3134 log.go:172] (0xc000a5c0b0) (0xc0006c9ea0) Create stream\nI0125 00:41:25.942492    3134 log.go:172] (0xc000a5c0b0) (0xc0006c9ea0) Stream added, broadcasting: 1\nI0125 00:41:25.946006    3134 log.go:172] (0xc000a5c0b0) Reply frame received for 1\nI0125 00:41:25.946051    3134 log.go:172] (0xc000a5c0b0) (0xc00061a820) Create stream\nI0125 00:41:25.946062    3134 log.go:172] (0xc000a5c0b0) (0xc00061a820) Stream added, broadcasting: 3\nI0125 00:41:25.947071    3134 log.go:172] (0xc000a5c0b0) Reply frame received for 3\nI0125 00:41:25.947103    3134 log.go:172] (0xc000a5c0b0) (0xc0006c9f40) Create stream\nI0125 00:41:25.947114    3134 log.go:172] (0xc000a5c0b0) (0xc0006c9f40) Stream added, broadcasting: 5\nI0125 00:41:25.948068    3134 log.go:172] (0xc000a5c0b0) Reply frame received for 5\nI0125 00:41:26.034212    3134 log.go:172] (0xc000a5c0b0) Data frame received for 3\nI0125 00:41:26.034286    3134 log.go:172] (0xc00061a820) (3) Data frame handling\nI0125 00:41:26.034303    3134 log.go:172] (0xc00061a820) (3) Data frame sent\nI0125 00:41:26.034363    3134 log.go:172] (0xc000a5c0b0) Data frame received for 5\nI0125 00:41:26.034383    3134 log.go:172] (0xc0006c9f40) (5) Data frame handling\nI0125 00:41:26.034397    3134 log.go:172] (0xc0006c9f40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0125 00:41:26.112561    3134 log.go:172] (0xc000a5c0b0) Data frame received for 1\nI0125 00:41:26.112684    3134 log.go:172] (0xc000a5c0b0) (0xc00061a820) Stream removed, broadcasting: 3\nI0125 00:41:26.112741    3134 log.go:172] (0xc0006c9ea0) (1) Data frame handling\nI0125 00:41:26.112764    3134 log.go:172] (0xc0006c9ea0) (1) Data frame sent\nI0125 00:41:26.112861    3134 log.go:172] (0xc000a5c0b0) (0xc0006c9f40) Stream removed, broadcasting: 5\nI0125 00:41:26.112896    3134 log.go:172] (0xc000a5c0b0) (0xc0006c9ea0) Stream removed, broadcasting: 1\nI0125 00:41:26.112923    3134 log.go:172] (0xc000a5c0b0) Go away received\nI0125 00:41:26.114070    3134 log.go:172] (0xc000a5c0b0) (0xc0006c9ea0) Stream removed, broadcasting: 1\nI0125 00:41:26.114081    3134 log.go:172] (0xc000a5c0b0) (0xc00061a820) Stream removed, broadcasting: 3\nI0125 00:41:26.114085    3134 log.go:172] (0xc000a5c0b0) (0xc0006c9f40) Stream removed, broadcasting: 5\n"
Jan 25 00:41:26.122: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 25 00:41:26.122: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 25 00:41:26.122: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 25 00:41:46.358: INFO: Deleting all statefulset in ns statefulset-3346
Jan 25 00:41:46.364: INFO: Scaling statefulset ss to 0
Jan 25 00:41:46.378: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 00:41:46.382: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:41:46.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3346" for this suite.

• [SLOW TEST:94.760 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":157,"skipped":2572,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:41:46.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4662, will wait for the garbage collector to delete the pods
Jan 25 00:41:58.700: INFO: Deleting Job.batch foo took: 22.237883ms
Jan 25 00:41:59.100: INFO: Terminating Job.batch foo pods took: 400.538114ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:42:42.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4662" for this suite.

• [SLOW TEST:56.169 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":158,"skipped":2604,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:42:42.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 25 00:42:42.718: INFO: Waiting up to 5m0s for pod "pod-6a3a2a26-3113-4a2d-8cbc-9cf38895c64f" in namespace "emptydir-5439" to be "success or failure"
Jan 25 00:42:42.735: INFO: Pod "pod-6a3a2a26-3113-4a2d-8cbc-9cf38895c64f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.712744ms
Jan 25 00:42:44.741: INFO: Pod "pod-6a3a2a26-3113-4a2d-8cbc-9cf38895c64f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023295806s
Jan 25 00:42:46.769: INFO: Pod "pod-6a3a2a26-3113-4a2d-8cbc-9cf38895c64f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050559798s
Jan 25 00:42:48.774: INFO: Pod "pod-6a3a2a26-3113-4a2d-8cbc-9cf38895c64f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056377702s
Jan 25 00:42:50.783: INFO: Pod "pod-6a3a2a26-3113-4a2d-8cbc-9cf38895c64f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065172896s
STEP: Saw pod success
Jan 25 00:42:50.783: INFO: Pod "pod-6a3a2a26-3113-4a2d-8cbc-9cf38895c64f" satisfied condition "success or failure"
Jan 25 00:42:50.787: INFO: Trying to get logs from node jerma-node pod pod-6a3a2a26-3113-4a2d-8cbc-9cf38895c64f container test-container: 
STEP: delete the pod
Jan 25 00:42:50.868: INFO: Waiting for pod pod-6a3a2a26-3113-4a2d-8cbc-9cf38895c64f to disappear
Jan 25 00:42:50.879: INFO: Pod pod-6a3a2a26-3113-4a2d-8cbc-9cf38895c64f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:42:50.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5439" for this suite.

• [SLOW TEST:8.284 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2614,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:42:50.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 25 00:42:51.040: INFO: Created pod &Pod{ObjectMeta:{dns-1121  dns-1121 /api/v1/namespaces/dns-1121/pods/dns-1121 a903403f-597d-41d3-ba1d-c4b0ab1485ea 4131402 0 2020-01-25 00:42:51 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9sxwb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9sxwb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9sxwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 25 00:42:59.056: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1121 PodName:dns-1121 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 00:42:59.056: INFO: >>> kubeConfig: /root/.kube/config
I0125 00:42:59.105296       9 log.go:172] (0xc00060a370) (0xc001f930e0) Create stream
I0125 00:42:59.105357       9 log.go:172] (0xc00060a370) (0xc001f930e0) Stream added, broadcasting: 1
I0125 00:42:59.109113       9 log.go:172] (0xc00060a370) Reply frame received for 1
I0125 00:42:59.109213       9 log.go:172] (0xc00060a370) (0xc00295fd60) Create stream
I0125 00:42:59.109226       9 log.go:172] (0xc00060a370) (0xc00295fd60) Stream added, broadcasting: 3
I0125 00:42:59.110990       9 log.go:172] (0xc00060a370) Reply frame received for 3
I0125 00:42:59.111017       9 log.go:172] (0xc00060a370) (0xc001f93180) Create stream
I0125 00:42:59.111031       9 log.go:172] (0xc00060a370) (0xc001f93180) Stream added, broadcasting: 5
I0125 00:42:59.112410       9 log.go:172] (0xc00060a370) Reply frame received for 5
I0125 00:42:59.218723       9 log.go:172] (0xc00060a370) Data frame received for 3
I0125 00:42:59.218763       9 log.go:172] (0xc00295fd60) (3) Data frame handling
I0125 00:42:59.218797       9 log.go:172] (0xc00295fd60) (3) Data frame sent
I0125 00:42:59.329517       9 log.go:172] (0xc00060a370) (0xc00295fd60) Stream removed, broadcasting: 3
I0125 00:42:59.329754       9 log.go:172] (0xc00060a370) Data frame received for 1
I0125 00:42:59.329779       9 log.go:172] (0xc001f930e0) (1) Data frame handling
I0125 00:42:59.329801       9 log.go:172] (0xc001f930e0) (1) Data frame sent
I0125 00:42:59.329846       9 log.go:172] (0xc00060a370) (0xc001f930e0) Stream removed, broadcasting: 1
I0125 00:42:59.330045       9 log.go:172] (0xc00060a370) (0xc001f93180) Stream removed, broadcasting: 5
I0125 00:42:59.330115       9 log.go:172] (0xc00060a370) Go away received
I0125 00:42:59.330437       9 log.go:172] (0xc00060a370) (0xc001f930e0) Stream removed, broadcasting: 1
I0125 00:42:59.330451       9 log.go:172] (0xc00060a370) (0xc00295fd60) Stream removed, broadcasting: 3
I0125 00:42:59.330459       9 log.go:172] (0xc00060a370) (0xc001f93180) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan 25 00:42:59.330: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1121 PodName:dns-1121 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 00:42:59.330: INFO: >>> kubeConfig: /root/.kube/config
I0125 00:42:59.379540       9 log.go:172] (0xc002aff810) (0xc001c4a5a0) Create stream
I0125 00:42:59.379589       9 log.go:172] (0xc002aff810) (0xc001c4a5a0) Stream added, broadcasting: 1
I0125 00:42:59.382850       9 log.go:172] (0xc002aff810) Reply frame received for 1
I0125 00:42:59.382886       9 log.go:172] (0xc002aff810) (0xc002128000) Create stream
I0125 00:42:59.382898       9 log.go:172] (0xc002aff810) (0xc002128000) Stream added, broadcasting: 3
I0125 00:42:59.384277       9 log.go:172] (0xc002aff810) Reply frame received for 3
I0125 00:42:59.384305       9 log.go:172] (0xc002aff810) (0xc001c4a640) Create stream
I0125 00:42:59.384315       9 log.go:172] (0xc002aff810) (0xc001c4a640) Stream added, broadcasting: 5
I0125 00:42:59.385706       9 log.go:172] (0xc002aff810) Reply frame received for 5
I0125 00:42:59.489339       9 log.go:172] (0xc002aff810) Data frame received for 3
I0125 00:42:59.489380       9 log.go:172] (0xc002128000) (3) Data frame handling
I0125 00:42:59.489395       9 log.go:172] (0xc002128000) (3) Data frame sent
I0125 00:42:59.548396       9 log.go:172] (0xc002aff810) Data frame received for 1
I0125 00:42:59.548440       9 log.go:172] (0xc001c4a5a0) (1) Data frame handling
I0125 00:42:59.548453       9 log.go:172] (0xc001c4a5a0) (1) Data frame sent
I0125 00:42:59.548470       9 log.go:172] (0xc002aff810) (0xc001c4a5a0) Stream removed, broadcasting: 1
I0125 00:42:59.548854       9 log.go:172] (0xc002aff810) (0xc002128000) Stream removed, broadcasting: 3
I0125 00:42:59.548940       9 log.go:172] (0xc002aff810) (0xc001c4a640) Stream removed, broadcasting: 5
I0125 00:42:59.548983       9 log.go:172] (0xc002aff810) (0xc001c4a5a0) Stream removed, broadcasting: 1
I0125 00:42:59.548996       9 log.go:172] (0xc002aff810) (0xc002128000) Stream removed, broadcasting: 3
I0125 00:42:59.549009       9 log.go:172] (0xc002aff810) (0xc001c4a640) Stream removed, broadcasting: 5
I0125 00:42:59.549093       9 log.go:172] (0xc002aff810) Go away received
Jan 25 00:42:59.549: INFO: Deleting pod dns-1121...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:42:59.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1121" for this suite.

• [SLOW TEST:8.800 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":160,"skipped":2618,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:42:59.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
Jan 25 00:43:00.510: INFO: created pod pod-service-account-defaultsa
Jan 25 00:43:00.510: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 25 00:43:00.545: INFO: created pod pod-service-account-mountsa
Jan 25 00:43:00.545: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 25 00:43:00.695: INFO: created pod pod-service-account-nomountsa
Jan 25 00:43:00.695: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 25 00:43:00.767: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 25 00:43:00.767: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 25 00:43:00.777: INFO: created pod pod-service-account-mountsa-mountspec
Jan 25 00:43:00.778: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 25 00:43:00.898: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 25 00:43:00.898: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 25 00:43:00.915: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 25 00:43:00.916: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 25 00:43:00.952: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 25 00:43:00.952: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 25 00:43:01.148: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 25 00:43:01.148: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:43:01.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2578" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":161,"skipped":2659,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:43:02.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-2340c180-b18f-46e0-bc04-bc4c81e9195f
STEP: Creating a pod to test consume secrets
Jan 25 00:43:05.409: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501" in namespace "projected-7490" to be "success or failure"
Jan 25 00:43:05.754: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 344.798581ms
Jan 25 00:43:07.764: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35559267s
Jan 25 00:43:11.610: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201578548s
Jan 25 00:43:15.675: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 10.265970107s
Jan 25 00:43:17.706: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 12.296929203s
Jan 25 00:43:19.790: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 14.381156701s
Jan 25 00:43:22.923: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 17.513998796s
Jan 25 00:43:24.928: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 19.519449975s
Jan 25 00:43:26.934: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501": Phase="Pending", Reason="", readiness=false. Elapsed: 21.524992362s
Jan 25 00:43:28.941: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.53198974s
STEP: Saw pod success
Jan 25 00:43:28.941: INFO: Pod "pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501" satisfied condition "success or failure"
Jan 25 00:43:28.946: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 00:43:28.987: INFO: Waiting for pod pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501 to disappear
Jan 25 00:43:28.993: INFO: Pod pod-projected-secrets-09fd9bb2-81ba-47a8-a80a-4528bb75f501 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:43:28.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7490" for this suite.

• [SLOW TEST:26.691 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2700,"failed":0}
SS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:43:29.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:43:29.118: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c86d5546-ccc1-40c8-abb5-297f333dfb6a" in namespace "security-context-test-9143" to be "success or failure"
Jan 25 00:43:29.132: INFO: Pod "busybox-privileged-false-c86d5546-ccc1-40c8-abb5-297f333dfb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.989574ms
Jan 25 00:43:31.138: INFO: Pod "busybox-privileged-false-c86d5546-ccc1-40c8-abb5-297f333dfb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019731433s
Jan 25 00:43:33.144: INFO: Pod "busybox-privileged-false-c86d5546-ccc1-40c8-abb5-297f333dfb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026269702s
Jan 25 00:43:35.153: INFO: Pod "busybox-privileged-false-c86d5546-ccc1-40c8-abb5-297f333dfb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03498438s
Jan 25 00:43:37.158: INFO: Pod "busybox-privileged-false-c86d5546-ccc1-40c8-abb5-297f333dfb6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040420508s
Jan 25 00:43:37.158: INFO: Pod "busybox-privileged-false-c86d5546-ccc1-40c8-abb5-297f333dfb6a" satisfied condition "success or failure"
Jan 25 00:43:37.225: INFO: Got logs for pod "busybox-privileged-false-c86d5546-ccc1-40c8-abb5-297f333dfb6a": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:43:37.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9143" for this suite.

• [SLOW TEST:8.240 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2702,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:43:37.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test hostPath mode
Jan 25 00:43:37.464: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-465" to be "success or failure"
Jan 25 00:43:37.519: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 54.972959ms
Jan 25 00:43:39.526: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062049513s
Jan 25 00:43:41.532: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068443788s
Jan 25 00:43:43.538: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074219046s
Jan 25 00:43:45.544: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079985162s
Jan 25 00:43:47.550: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086105914s
STEP: Saw pod success
Jan 25 00:43:47.550: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 25 00:43:47.554: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 25 00:43:47.825: INFO: Waiting for pod pod-host-path-test to disappear
Jan 25 00:43:47.859: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:43:47.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-465" for this suite.

• [SLOW TEST:10.634 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2730,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:43:47.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 00:43:48.019: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393" in namespace "downward-api-9313" to be "success or failure"
Jan 25 00:43:48.030: INFO: Pod "downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393": Phase="Pending", Reason="", readiness=false. Elapsed: 11.00078ms
Jan 25 00:43:50.036: INFO: Pod "downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017159035s
Jan 25 00:43:52.044: INFO: Pod "downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024895462s
Jan 25 00:43:54.055: INFO: Pod "downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036567807s
Jan 25 00:43:56.550: INFO: Pod "downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530963363s
Jan 25 00:43:58.616: INFO: Pod "downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.597725535s
STEP: Saw pod success
Jan 25 00:43:58.617: INFO: Pod "downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393" satisfied condition "success or failure"
Jan 25 00:43:58.656: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393 container client-container: 
STEP: delete the pod
Jan 25 00:43:58.693: INFO: Waiting for pod downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393 to disappear
Jan 25 00:43:58.698: INFO: Pod downwardapi-volume-00578a4f-b2db-4c2a-8eba-b9cfcf298393 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:43:58.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9313" for this suite.

• [SLOW TEST:10.825 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2741,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:43:58.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:43:58.813: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-772b92ec-6c13-4455-b0ad-130baa3cef36" in namespace "security-context-test-2736" to be "success or failure"
Jan 25 00:43:58.817: INFO: Pod "alpine-nnp-false-772b92ec-6c13-4455-b0ad-130baa3cef36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292517ms
Jan 25 00:44:00.823: INFO: Pod "alpine-nnp-false-772b92ec-6c13-4455-b0ad-130baa3cef36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010376394s
Jan 25 00:44:02.830: INFO: Pod "alpine-nnp-false-772b92ec-6c13-4455-b0ad-130baa3cef36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017723464s
Jan 25 00:44:04.836: INFO: Pod "alpine-nnp-false-772b92ec-6c13-4455-b0ad-130baa3cef36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023311108s
Jan 25 00:44:06.842: INFO: Pod "alpine-nnp-false-772b92ec-6c13-4455-b0ad-130baa3cef36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.029639152s
Jan 25 00:44:06.842: INFO: Pod "alpine-nnp-false-772b92ec-6c13-4455-b0ad-130baa3cef36" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:44:06.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2736" for this suite.

• [SLOW TEST:8.208 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2778,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:44:06.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: executing a command with run --rm and attach with stdin
Jan 25 00:44:07.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2021 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 25 00:44:18.001: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0125 00:44:16.976606    3154 log.go:172] (0xc000b9e160) (0xc0007d5720) Create stream\nI0125 00:44:16.976839    3154 log.go:172] (0xc000b9e160) (0xc0007d5720) Stream added, broadcasting: 1\nI0125 00:44:16.991048    3154 log.go:172] (0xc000b9e160) Reply frame received for 1\nI0125 00:44:16.991382    3154 log.go:172] (0xc000b9e160) (0xc0007c6280) Create stream\nI0125 00:44:16.991422    3154 log.go:172] (0xc000b9e160) (0xc0007c6280) Stream added, broadcasting: 3\nI0125 00:44:16.997553    3154 log.go:172] (0xc000b9e160) Reply frame received for 3\nI0125 00:44:16.997761    3154 log.go:172] (0xc000b9e160) (0xc0006a4f00) Create stream\nI0125 00:44:16.997855    3154 log.go:172] (0xc000b9e160) (0xc0006a4f00) Stream added, broadcasting: 5\nI0125 00:44:17.000915    3154 log.go:172] (0xc000b9e160) Reply frame received for 5\nI0125 00:44:17.001030    3154 log.go:172] (0xc000b9e160) (0xc0007c6320) Create stream\nI0125 00:44:17.001055    3154 log.go:172] (0xc000b9e160) (0xc0007c6320) Stream added, broadcasting: 7\nI0125 00:44:17.004170    3154 log.go:172] (0xc000b9e160) Reply frame received for 7\nI0125 00:44:17.004603    3154 log.go:172] (0xc0007c6280) (3) Writing data frame\nI0125 00:44:17.004918    3154 log.go:172] (0xc0007c6280) (3) Writing data frame\nI0125 00:44:17.013428    3154 log.go:172] (0xc000b9e160) Data frame received for 5\nI0125 00:44:17.013465    3154 log.go:172] (0xc0006a4f00) (5) Data frame handling\nI0125 00:44:17.013557    3154 log.go:172] (0xc0006a4f00) (5) Data frame sent\nI0125 00:44:17.016630    3154 log.go:172] (0xc000b9e160) Data frame received for 5\nI0125 00:44:17.016653    3154 log.go:172] (0xc0006a4f00) (5) Data frame handling\nI0125 00:44:17.016677    3154 log.go:172] (0xc0006a4f00) (5) Data frame sent\nI0125 00:44:17.943558    3154 log.go:172] (0xc000b9e160) Data frame received for 1\nI0125 00:44:17.943847    3154 log.go:172] (0xc000b9e160) (0xc0007c6280) Stream removed, broadcasting: 3\nI0125 00:44:17.944031    3154 log.go:172] (0xc0007d5720) (1) Data frame handling\nI0125 00:44:17.944084    3154 log.go:172] (0xc0007d5720) (1) Data frame sent\nI0125 00:44:17.944099    3154 log.go:172] (0xc000b9e160) (0xc0006a4f00) Stream removed, broadcasting: 5\nI0125 00:44:17.944157    3154 log.go:172] (0xc000b9e160) (0xc0007d5720) Stream removed, broadcasting: 1\nI0125 00:44:17.945466    3154 log.go:172] (0xc000b9e160) (0xc0007c6320) Stream removed, broadcasting: 7\nI0125 00:44:17.945531    3154 log.go:172] (0xc000b9e160) (0xc0007d5720) Stream removed, broadcasting: 1\nI0125 00:44:17.945549    3154 log.go:172] (0xc000b9e160) (0xc0007c6280) Stream removed, broadcasting: 3\nI0125 00:44:17.945575    3154 log.go:172] (0xc000b9e160) (0xc0006a4f00) Stream removed, broadcasting: 5\nI0125 00:44:17.945590    3154 log.go:172] (0xc000b9e160) (0xc0007c6320) Stream removed, broadcasting: 7\nI0125 00:44:17.946452    3154 log.go:172] (0xc000b9e160) Go away received\n"
Jan 25 00:44:18.001: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:44:20.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2021" for this suite.

• [SLOW TEST:13.103 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1945
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":278,"completed":167,"skipped":2784,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:44:20.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 25 00:44:26.240: INFO: &Pod{ObjectMeta:{send-events-cedcb86f-37dd-4e92-9f60-c02b0331d3de  events-7735 /api/v1/namespaces/events-7735/pods/send-events-cedcb86f-37dd-4e92-9f60-c02b0331d3de 87dab401-0c31-40df-a274-1ac039e934b8 4131888 0 2020-01-25 00:44:20 +0000 UTC   map[name:foo time:136604574] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-z5hdv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-z5hdv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-z5hdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:44:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:44:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:44:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:44:20 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-25 00:44:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 00:44:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://3e61cdc315af7e33c8b4dcf581c930eb34fa190b8b962aef696750f46d8e087a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 25 00:44:28.247: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 25 00:44:30.256: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:44:30.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7735" for this suite.

• [SLOW TEST:10.308 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":168,"skipped":2797,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:44:30.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 25 00:44:30.454: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:44:42.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7831" for this suite.

• [SLOW TEST:12.603 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":169,"skipped":2819,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:44:42.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 00:44:43.784: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 00:44:45.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:44:47.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:44:49.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509883, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 00:44:52.833: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:44:52.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:44:54.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2319" for this suite.
STEP: Destroying namespace "webhook-2319-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.518 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":170,"skipped":2830,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:44:54.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-0379c7e2-35c6-4a27-ba89-0ffac7c06999
STEP: Creating a pod to test consume secrets
Jan 25 00:44:54.594: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c" in namespace "projected-3569" to be "success or failure"
Jan 25 00:44:54.644: INFO: Pod "pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c": Phase="Pending", Reason="", readiness=false. Elapsed: 50.780707ms
Jan 25 00:44:56.686: INFO: Pod "pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092333988s
Jan 25 00:44:58.690: INFO: Pod "pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096807656s
Jan 25 00:45:00.696: INFO: Pod "pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102765852s
Jan 25 00:45:02.705: INFO: Pod "pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111298598s
Jan 25 00:45:04.713: INFO: Pod "pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.119725654s
Jan 25 00:45:06.730: INFO: Pod "pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.136295092s
STEP: Saw pod success
Jan 25 00:45:06.730: INFO: Pod "pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c" satisfied condition "success or failure"
Jan 25 00:45:06.773: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 00:45:06.834: INFO: Waiting for pod pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c to disappear
Jan 25 00:45:06.841: INFO: Pod pod-projected-secrets-e1958e1e-f2ed-46cf-8626-fb3a8e0f003c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:45:06.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3569" for this suite.

• [SLOW TEST:12.405 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2836,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:45:06.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2530
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2530
STEP: Creating statefulset with conflicting port in namespace statefulset-2530
STEP: Waiting until pod test-pod starts running in namespace statefulset-2530
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2530
Jan 25 00:45:17.273: INFO: Observed stateful pod in namespace: statefulset-2530, name: ss-0, uid: ea3468c5-00e0-462b-a2f8-b0bb1e0792d2, status phase: Pending. Waiting for statefulset controller to delete.
Jan 25 00:45:22.312: INFO: Observed stateful pod in namespace: statefulset-2530, name: ss-0, uid: ea3468c5-00e0-462b-a2f8-b0bb1e0792d2, status phase: Failed. Waiting for statefulset controller to delete.
Jan 25 00:45:22.321: INFO: Observed stateful pod in namespace: statefulset-2530, name: ss-0, uid: ea3468c5-00e0-462b-a2f8-b0bb1e0792d2, status phase: Failed. Waiting for statefulset controller to delete.
Jan 25 00:45:22.357: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2530
STEP: Removing pod with conflicting port in namespace statefulset-2530
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2530 and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 25 00:45:32.526: INFO: Deleting all statefulset in ns statefulset-2530
Jan 25 00:45:32.530: INFO: Scaling statefulset ss to 0
Jan 25 00:45:42.569: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 00:45:42.596: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:45:42.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2530" for this suite.

• [SLOW TEST:35.895 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":172,"skipped":2861,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:45:42.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-07b5e4c4-7feb-4917-b34e-2fd7e8e89b25
STEP: Creating a pod to test consume configMaps
Jan 25 00:45:42.899: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba779f10-5970-4c35-9ec5-32bdbeec13d5" in namespace "configmap-9505" to be "success or failure"
Jan 25 00:45:42.906: INFO: Pod "pod-configmaps-ba779f10-5970-4c35-9ec5-32bdbeec13d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252882ms
Jan 25 00:45:44.914: INFO: Pod "pod-configmaps-ba779f10-5970-4c35-9ec5-32bdbeec13d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015020819s
Jan 25 00:45:46.920: INFO: Pod "pod-configmaps-ba779f10-5970-4c35-9ec5-32bdbeec13d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020665177s
Jan 25 00:45:48.933: INFO: Pod "pod-configmaps-ba779f10-5970-4c35-9ec5-32bdbeec13d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033891057s
Jan 25 00:45:50.957: INFO: Pod "pod-configmaps-ba779f10-5970-4c35-9ec5-32bdbeec13d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05780864s
STEP: Saw pod success
Jan 25 00:45:50.957: INFO: Pod "pod-configmaps-ba779f10-5970-4c35-9ec5-32bdbeec13d5" satisfied condition "success or failure"
Jan 25 00:45:50.972: INFO: Trying to get logs from node jerma-node pod pod-configmaps-ba779f10-5970-4c35-9ec5-32bdbeec13d5 container configmap-volume-test: 
STEP: delete the pod
Jan 25 00:45:51.004: INFO: Waiting for pod pod-configmaps-ba779f10-5970-4c35-9ec5-32bdbeec13d5 to disappear
Jan 25 00:45:51.051: INFO: Pod pod-configmaps-ba779f10-5970-4c35-9ec5-32bdbeec13d5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:45:51.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9505" for this suite.

• [SLOW TEST:8.347 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2862,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:45:51.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-5089ac85-9c0d-48df-aba1-2ad0b598afc5
STEP: Creating a pod to test consume secrets
Jan 25 00:45:51.379: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380" in namespace "projected-4303" to be "success or failure"
Jan 25 00:45:51.436: INFO: Pod "pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380": Phase="Pending", Reason="", readiness=false. Elapsed: 57.170105ms
Jan 25 00:45:53.446: INFO: Pod "pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066993218s
Jan 25 00:45:55.456: INFO: Pod "pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076821217s
Jan 25 00:45:57.463: INFO: Pod "pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084101547s
Jan 25 00:45:59.485: INFO: Pod "pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105753572s
Jan 25 00:46:01.521: INFO: Pod "pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.142342883s
STEP: Saw pod success
Jan 25 00:46:01.521: INFO: Pod "pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380" satisfied condition "success or failure"
Jan 25 00:46:01.526: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380 container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 00:46:01.687: INFO: Waiting for pod pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380 to disappear
Jan 25 00:46:01.694: INFO: Pod pod-projected-secrets-28748ff4-4916-41a9-9c2e-207969866380 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:46:01.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4303" for this suite.

• [SLOW TEST:10.610 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2872,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:46:01.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 25 00:46:11.918: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-198 PodName:pod-sharedvolume-1e581baa-dd5c-4a6d-be72-31443e079b0c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 00:46:11.918: INFO: >>> kubeConfig: /root/.kube/config
I0125 00:46:11.971356       9 log.go:172] (0xc00060a630) (0xc0011843c0) Create stream
I0125 00:46:11.971403       9 log.go:172] (0xc00060a630) (0xc0011843c0) Stream added, broadcasting: 1
I0125 00:46:11.974991       9 log.go:172] (0xc00060a630) Reply frame received for 1
I0125 00:46:11.975024       9 log.go:172] (0xc00060a630) (0xc002c74aa0) Create stream
I0125 00:46:11.975034       9 log.go:172] (0xc00060a630) (0xc002c74aa0) Stream added, broadcasting: 3
I0125 00:46:11.976094       9 log.go:172] (0xc00060a630) Reply frame received for 3
I0125 00:46:11.976123       9 log.go:172] (0xc00060a630) (0xc001b48000) Create stream
I0125 00:46:11.976135       9 log.go:172] (0xc00060a630) (0xc001b48000) Stream added, broadcasting: 5
I0125 00:46:11.977157       9 log.go:172] (0xc00060a630) Reply frame received for 5
I0125 00:46:12.053578       9 log.go:172] (0xc00060a630) Data frame received for 3
I0125 00:46:12.053642       9 log.go:172] (0xc002c74aa0) (3) Data frame handling
I0125 00:46:12.053674       9 log.go:172] (0xc002c74aa0) (3) Data frame sent
I0125 00:46:12.181307       9 log.go:172] (0xc00060a630) Data frame received for 1
I0125 00:46:12.181479       9 log.go:172] (0xc00060a630) (0xc002c74aa0) Stream removed, broadcasting: 3
I0125 00:46:12.181584       9 log.go:172] (0xc0011843c0) (1) Data frame handling
I0125 00:46:12.181619       9 log.go:172] (0xc0011843c0) (1) Data frame sent
I0125 00:46:12.181661       9 log.go:172] (0xc00060a630) (0xc001b48000) Stream removed, broadcasting: 5
I0125 00:46:12.181698       9 log.go:172] (0xc00060a630) (0xc0011843c0) Stream removed, broadcasting: 1
I0125 00:46:12.181725       9 log.go:172] (0xc00060a630) Go away received
I0125 00:46:12.182136       9 log.go:172] (0xc00060a630) (0xc0011843c0) Stream removed, broadcasting: 1
I0125 00:46:12.182161       9 log.go:172] (0xc00060a630) (0xc002c74aa0) Stream removed, broadcasting: 3
I0125 00:46:12.182173       9 log.go:172] (0xc00060a630) (0xc001b48000) Stream removed, broadcasting: 5
Jan 25 00:46:12.182: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:46:12.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-198" for this suite.

• [SLOW TEST:10.490 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":175,"skipped":2893,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:46:12.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1789
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 00:46:12.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3334'
Jan 25 00:46:12.537: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 00:46:12.537: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1794
Jan 25 00:46:12.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-3334'
Jan 25 00:46:12.975: INFO: stderr: ""
Jan 25 00:46:12.975: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:46:12.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3334" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":176,"skipped":2897,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:46:12.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:46:24.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3760" for this suite.

• [SLOW TEST:11.431 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":177,"skipped":2899,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:46:24.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 00:46:25.118: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 00:46:27.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:46:29.141: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:46:31.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:46:33.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715509985, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 00:46:36.243: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:46:36.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6702" for this suite.
STEP: Destroying namespace "webhook-6702-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.262 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":178,"skipped":2917,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:46:36.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 00:46:36.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a" in namespace "downward-api-3306" to be "success or failure"
Jan 25 00:46:36.769: INFO: Pod "downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.069744ms
Jan 25 00:46:38.780: INFO: Pod "downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01401679s
Jan 25 00:46:40.785: INFO: Pod "downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019478331s
Jan 25 00:46:42.793: INFO: Pod "downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026835317s
Jan 25 00:46:44.801: INFO: Pod "downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035632879s
Jan 25 00:46:46.827: INFO: Pod "downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061052489s
STEP: Saw pod success
Jan 25 00:46:46.827: INFO: Pod "downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a" satisfied condition "success or failure"
Jan 25 00:46:46.832: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a container client-container: 
STEP: delete the pod
Jan 25 00:46:46.896: INFO: Waiting for pod downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a to disappear
Jan 25 00:46:46.926: INFO: Pod downwardapi-volume-0fcda5a3-7b71-467c-a816-0923be2c358a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:46:46.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3306" for this suite.

• [SLOW TEST:10.296 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2930,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:46:46.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 25 00:46:47.126: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-a 422d1749-6dd6-43b8-8b72-d6d215530d1a 4132708 0 2020-01-25 00:46:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 00:46:47.127: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-a 422d1749-6dd6-43b8-8b72-d6d215530d1a 4132708 0 2020-01-25 00:46:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 25 00:46:57.142: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-a 422d1749-6dd6-43b8-8b72-d6d215530d1a 4132742 0 2020-01-25 00:46:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 25 00:46:57.142: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-a 422d1749-6dd6-43b8-8b72-d6d215530d1a 4132742 0 2020-01-25 00:46:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 25 00:47:07.156: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-a 422d1749-6dd6-43b8-8b72-d6d215530d1a 4132765 0 2020-01-25 00:46:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 00:47:07.157: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-a 422d1749-6dd6-43b8-8b72-d6d215530d1a 4132765 0 2020-01-25 00:46:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 25 00:47:17.171: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-a 422d1749-6dd6-43b8-8b72-d6d215530d1a 4132788 0 2020-01-25 00:46:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 00:47:17.171: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-a 422d1749-6dd6-43b8-8b72-d6d215530d1a 4132788 0 2020-01-25 00:46:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 25 00:47:27.180: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-b 811633a0-fc80-4009-9270-e0c01d2854ef 4132812 0 2020-01-25 00:47:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 00:47:27.180: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-b 811633a0-fc80-4009-9270-e0c01d2854ef 4132812 0 2020-01-25 00:47:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 25 00:47:37.190: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-b 811633a0-fc80-4009-9270-e0c01d2854ef 4132836 0 2020-01-25 00:47:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 00:47:37.191: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7413 /api/v1/namespaces/watch-7413/configmaps/e2e-watch-test-configmap-b 811633a0-fc80-4009-9270-e0c01d2854ef 4132836 0 2020-01-25 00:47:27 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:47:47.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7413" for this suite.

• [SLOW TEST:60.234 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":180,"skipped":2945,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:47:47.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-a5763760-2b6e-4d6e-a10d-a697e9e9710c in namespace container-probe-7453
Jan 25 00:47:55.503: INFO: Started pod busybox-a5763760-2b6e-4d6e-a10d-a697e9e9710c in namespace container-probe-7453
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 00:47:55.509: INFO: Initial restart count of pod busybox-a5763760-2b6e-4d6e-a10d-a697e9e9710c is 0
Jan 25 00:48:47.739: INFO: Restart count of pod container-probe-7453/busybox-a5763760-2b6e-4d6e-a10d-a697e9e9710c is now 1 (52.229471547s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:48:47.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7453" for this suite.

• [SLOW TEST:60.658 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2993,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:48:47.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:48:48.050: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 25 00:48:48.056: INFO: Number of nodes with available pods: 0
Jan 25 00:48:48.056: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 25 00:48:48.093: INFO: Number of nodes with available pods: 0
Jan 25 00:48:48.093: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:48:49.102: INFO: Number of nodes with available pods: 0
Jan 25 00:48:49.102: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:48:50.100: INFO: Number of nodes with available pods: 0
Jan 25 00:48:50.100: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:48:51.281: INFO: Number of nodes with available pods: 0
Jan 25 00:48:51.281: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:48:53.584: INFO: Number of nodes with available pods: 0
Jan 25 00:48:53.584: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:48:54.287: INFO: Number of nodes with available pods: 0
Jan 25 00:48:54.287: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:48:55.321: INFO: Number of nodes with available pods: 0
Jan 25 00:48:55.321: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:48:56.103: INFO: Number of nodes with available pods: 0
Jan 25 00:48:56.103: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:48:57.100: INFO: Number of nodes with available pods: 1
Jan 25 00:48:57.100: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 25 00:48:57.219: INFO: Number of nodes with available pods: 1
Jan 25 00:48:57.219: INFO: Number of running nodes: 0, number of available pods: 1
Jan 25 00:48:58.225: INFO: Number of nodes with available pods: 0
Jan 25 00:48:58.226: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 25 00:48:58.307: INFO: Number of nodes with available pods: 0
Jan 25 00:48:58.307: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:48:59.312: INFO: Number of nodes with available pods: 0
Jan 25 00:48:59.312: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:00.316: INFO: Number of nodes with available pods: 0
Jan 25 00:49:00.316: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:01.438: INFO: Number of nodes with available pods: 0
Jan 25 00:49:01.438: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:02.313: INFO: Number of nodes with available pods: 0
Jan 25 00:49:02.313: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:03.313: INFO: Number of nodes with available pods: 0
Jan 25 00:49:03.313: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:04.315: INFO: Number of nodes with available pods: 0
Jan 25 00:49:04.315: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:05.572: INFO: Number of nodes with available pods: 0
Jan 25 00:49:05.572: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:06.370: INFO: Number of nodes with available pods: 0
Jan 25 00:49:06.370: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:07.312: INFO: Number of nodes with available pods: 0
Jan 25 00:49:07.312: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:08.443: INFO: Number of nodes with available pods: 0
Jan 25 00:49:08.443: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:09.315: INFO: Number of nodes with available pods: 0
Jan 25 00:49:09.315: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:10.374: INFO: Number of nodes with available pods: 0
Jan 25 00:49:10.374: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:11.314: INFO: Number of nodes with available pods: 0
Jan 25 00:49:11.314: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 25 00:49:12.340: INFO: Number of nodes with available pods: 1
Jan 25 00:49:12.340: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2660, will wait for the garbage collector to delete the pods
Jan 25 00:49:12.413: INFO: Deleting DaemonSet.extensions daemon-set took: 8.313016ms
Jan 25 00:49:12.713: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.292034ms
Jan 25 00:49:19.319: INFO: Number of nodes with available pods: 0
Jan 25 00:49:19.319: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 00:49:19.324: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2660/daemonsets","resourceVersion":"4133154"},"items":null}

Jan 25 00:49:19.328: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2660/pods","resourceVersion":"4133154"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:49:19.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2660" for this suite.

• [SLOW TEST:31.566 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":182,"skipped":3008,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:49:19.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0125 00:49:50.171520       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 00:49:50.171: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:49:50.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7368" for this suite.

• [SLOW TEST:30.746 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":183,"skipped":3010,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:49:50.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 00:49:50.316: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc" in namespace "projected-5656" to be "success or failure"
Jan 25 00:49:50.328: INFO: Pod "downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 11.485943ms
Jan 25 00:49:52.334: INFO: Pod "downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017836221s
Jan 25 00:49:54.341: INFO: Pod "downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024735539s
Jan 25 00:49:56.381: INFO: Pod "downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064695275s
Jan 25 00:49:58.429: INFO: Pod "downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112684707s
Jan 25 00:50:00.690: INFO: Pod "downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.374218861s
Jan 25 00:50:02.698: INFO: Pod "downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.382424837s
STEP: Saw pod success
Jan 25 00:50:02.699: INFO: Pod "downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc" satisfied condition "success or failure"
Jan 25 00:50:02.706: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc container client-container: 
STEP: delete the pod
Jan 25 00:50:02.771: INFO: Waiting for pod downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc to disappear
Jan 25 00:50:02.783: INFO: Pod downwardapi-volume-a5eabe9e-db27-44d9-983e-b206bf3cb5bc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:50:02.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5656" for this suite.

• [SLOW TEST:12.610 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3011,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:50:02.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 00:50:11.114: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:50:11.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8907" for this suite.

• [SLOW TEST:8.371 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3024,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:50:11.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 25 00:50:19.860: INFO: Successfully updated pod "labelsupdatefdcbb750-b146-41c8-ac3e-e122598ed216"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:50:23.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6328" for this suite.

• [SLOW TEST:12.777 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3034,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:50:23.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1597
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 00:50:24.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9060'
Jan 25 00:50:24.143: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 00:50:24.144: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1603
Jan 25 00:50:26.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9060'
Jan 25 00:50:26.416: INFO: stderr: ""
Jan 25 00:50:26.416: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:50:26.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9060" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":187,"skipped":3039,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:50:26.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:50:26.630: INFO: Creating deployment "webserver-deployment"
Jan 25 00:50:26.639: INFO: Waiting for observed generation 1
Jan 25 00:50:29.496: INFO: Waiting for all required pods to come up
Jan 25 00:50:30.371: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 25 00:50:56.880: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 25 00:50:56.889: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 25 00:50:56.900: INFO: Updating deployment webserver-deployment
Jan 25 00:50:56.900: INFO: Waiting for observed generation 2
Jan 25 00:50:59.458: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 25 00:50:59.501: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 25 00:50:59.866: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 25 00:51:00.106: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 25 00:51:00.106: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 25 00:51:00.131: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 25 00:51:00.140: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 25 00:51:00.140: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 25 00:51:00.150: INFO: Updating deployment webserver-deployment
Jan 25 00:51:00.150: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 25 00:51:01.666: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 25 00:51:05.997: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
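
Note: the replica counts being verified follow from the RollingUpdate bounds in the Deployment dump below (MaxUnavailable:2, MaxSurge:3). With 10 desired replicas and an unresolvable new image, availability may drop by at most 2 (old ReplicaSet: 8) and the total may surge to at most 13 (new ReplicaSet: 5). Scaling 10 -> 30 raises the surge cap to 33, and the extra 20 replicas are split in proportion to the current 8:5 sizes: 20*8/13 rounds to 12 for the old set (8+12=20) and the remaining 8 go to the new one (5+8=13), matching the two verifications above.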
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67
Jan 25 00:51:08.446: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-1273 /apis/apps/v1/namespaces/deployment-1273/deployments/webserver-deployment 20c7bfe8-3764-4e1e-b368-b60b1af3baa1 4133781 3 2020-01-25 00:50:26 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00526cf48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-25 00:51:01 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-25 00:51:04 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jan 25 00:51:09.448: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-1273 /apis/apps/v1/namespaces/deployment-1273/replicasets/webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 4133777 3 2020-01-25 00:50:56 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 20c7bfe8-3764-4e1e-b368-b60b1af3baa1 0xc00526d3f7 0xc00526d3f8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00526d468  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 25 00:51:09.448: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 25 00:51:09.448: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-1273 /apis/apps/v1/namespaces/deployment-1273/replicasets/webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 4133767 3 2020-01-25 00:50:26 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 20c7bfe8-3764-4e1e-b368-b60b1af3baa1 0xc00526d337 0xc00526d338}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00526d398  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jan 25 00:51:10.106: INFO: Pod "webserver-deployment-595b5b9587-2cjb6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2cjb6 webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-2cjb6 d4eb1d03-3a29-4c0a-99ab-2d6f31aa10b6 4133793 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc00522f607 0xc00522f608}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 00:51:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.107: INFO: Pod "webserver-deployment-595b5b9587-2dj8v" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2dj8v webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-2dj8v fe183d08-bcd5-4de9-b8cf-e7c78521a341 4133612 0 2020-01-25 00:50:26 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc00522f787 0xc00522f788}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-25 00:50:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 00:50:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://dab16677bc84ebefee305482cea10b39ffb021ecfac9cc26d51f183b6c542812,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
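The per-pod "available" / "not available" verdicts in these dumps hinge on each pod's Ready condition. As a minimal sketch of that check, assuming the framework follows the upstream availability helper in k8s.io/kubernetes/pkg/api/v1/pod (importing k8s.io/kubernetes as a module is also assumed): a pod counts as available once PodReady has been True for at least minReadySeconds, and a Deployment's minReadySeconds defaults to 0, so Ready and available coincide here.

    package main

    import (
    	"fmt"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	podutil "k8s.io/kubernetes/pkg/api/v1/pod"
    )

    // reportAvailability mirrors the per-pod verdicts in this log: a pod is
    // "available" once its Ready condition has been True for minReadySeconds.
    func reportAvailability(pods []v1.Pod, minReadySeconds int32) {
    	now := metav1.NewTime(time.Now())
    	for i := range pods {
    		verdict := "not available"
    		if podutil.IsPodAvailable(&pods[i], minReadySeconds, now) {
    			verdict = "available"
    		}
    		fmt.Printf("Pod %q is %s\n", pods[i].Name, verdict)
    	}
    }

    func main() { reportAvailability(nil, 0) }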
Jan 25 00:51:10.107: INFO: Pod "webserver-deployment-595b5b9587-6gv9x" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6gv9x webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-6gv9x 0a47ff7a-a4e8-4cf2-ac15-3bcc595409cb 4133758 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc00522f900 0xc00522f901}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.107: INFO: Pod "webserver-deployment-595b5b9587-7q6b2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7q6b2 webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-7q6b2 1e5e8803-1b6c-4024-8a91-2c28de6411a7 4133741 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc00522fa07 0xc00522fa08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.108: INFO: Pod "webserver-deployment-595b5b9587-8ctpn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8ctpn webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-8ctpn 900ba532-81e0-4361-81cd-4b1c5faf93c1 4133745 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc00522fb17 0xc00522fb18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.108: INFO: Pod "webserver-deployment-595b5b9587-9dgs2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9dgs2 webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-9dgs2 05c8116a-d632-43af-9be2-aee4cda30e78 4133789 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc00522fc37 0xc00522fc38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 00:51:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
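The 9dgs2 dump above shows the not-yet-available shape in full: PodScheduled is True, but Ready and ContainersReady are False with Reason ContainersNotReady while the httpd container waits in ContainerCreating. Reusing the imports from the sketch above, the same condition can be read back with GetPodCondition (a real accessor in k8s.io/kubernetes/pkg/api/v1/pod); the explainUnready wrapper itself is hypothetical:

    // explainUnready is a hypothetical helper: it extracts the Ready condition
    // exactly as these dumps print it and returns a short reason string.
    func explainUnready(pod *v1.Pod) string {
    	_, cond := podutil.GetPodCondition(&pod.Status, v1.PodReady)
    	if cond == nil {
    		// Pending pods that only report PodScheduled land here.
    		return "no Ready condition reported yet"
    	}
    	if cond.Status == v1.ConditionTrue {
    		return "ready"
    	}
    	// e.g. "ContainersNotReady: containers with unready status: [httpd]"
    	return fmt.Sprintf("%s: %s", cond.Reason, cond.Message)
    }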
Jan 25 00:51:10.108: INFO: Pod "webserver-deployment-595b5b9587-fh9hs" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fh9hs webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-fh9hs 24fc6d9d-621e-4bc0-a1a2-37754000fbbf 4133620 0 2020-01-25 00:50:26 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc00522fd87 0xc00522fd88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-01-25 00:50:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 00:50:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://f60c87213ae210a468447078c013e1a97f1b85295abf1dd1caf7a446ed49743a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.109: INFO: Pod "webserver-deployment-595b5b9587-h7jd7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-h7jd7 webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-h7jd7 a4f308d9-fde5-4576-ab82-d6ebd2ab3caf 4133637 0 2020-01-25 00:50:26 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc00522ff00 0xc00522ff01}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-25 00:50:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 00:50:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://672e40f76a419af287bb825183c0653879d70f2498866ead9cb9c3e54cc3bd7e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.109: INFO: Pod "webserver-deployment-595b5b9587-ht6cl" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ht6cl webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-ht6cl fac81c6d-3d69-4d83-834a-88f6702ed848 4133631 0 2020-01-25 00:50:26 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a0060 0xc0062a0061}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-25 00:50:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 00:50:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://40013098e758d7573d9211a6ed5c8ba67ed1062a5dd10b37649598ec9266c621,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.109: INFO: Pod "webserver-deployment-595b5b9587-knwvg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-knwvg webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-knwvg 2030d365-23b0-4d0e-b9c4-3020886a464d 4133718 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a01c0 0xc0062a01c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.109: INFO: Pod "webserver-deployment-595b5b9587-kwnp8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kwnp8 webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-kwnp8 582c9467-5fb8-4947-ad28-f140b65b1eef 4133761 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a02d7 0xc0062a02d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.109: INFO: Pod "webserver-deployment-595b5b9587-nch2g" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nch2g webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-nch2g 4e0c7df4-4c0c-4919-8341-833828d0515f 4133715 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a03f7 0xc0062a03f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
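Up to this point the dumps split cleanly by creation time: every pod created at 00:50:26 is Running and Ready, while every pod created at 00:51:01 is still Pending, the expected picture mid scale-up. A sketch of tallying that split with client-go (a modern List signature taking a context is assumed; add context and k8s.io/client-go/kubernetes to the imports above; the namespace and label selector are copied from these dumps):

    // countByPhase tallies the ReplicaSet's pods by phase, reproducing the
    // mixed Running/Pending picture visible in this part of the log.
    func countByPhase(ctx context.Context, client kubernetes.Interface) (map[v1.PodPhase]int, error) {
    	pods, err := client.CoreV1().Pods("deployment-1273").List(ctx, metav1.ListOptions{
    		LabelSelector: "name=httpd,pod-template-hash=595b5b9587",
    	})
    	if err != nil {
    		return nil, err
    	}
    	counts := map[v1.PodPhase]int{}
    	for _, p := range pods.Items {
    		counts[p.Status.Phase]++
    	}
    	return counts, nil
    }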
Jan 25 00:51:10.110: INFO: Pod "webserver-deployment-595b5b9587-phgpd" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-phgpd webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-phgpd c9ea8cec-11ec-44d9-b7c6-9e1ad1b15516 4133601 0 2020-01-25 00:50:26 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a0517 0xc0062a0518}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-25 00:50:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-25 00:50:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 00:50:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ed741193e91a9c7b4e4a091530f0250d129b960eb8591af4cd7503ba09696beb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.110: INFO: Pod "webserver-deployment-595b5b9587-pj9lt" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-pj9lt webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-pj9lt e81656e0-d582-46e7-8e0d-ecde12ae8da5 4133634 0 2020-01-25 00:50:26 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a0690 0xc0062a0691}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-25 00:50:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 00:50:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://caa51c72cb6c17910cab315dc4c915ddc34a21d92424b898e6af514bcceae5c0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.110: INFO: Pod "webserver-deployment-595b5b9587-t285v" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t285v webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-t285v 102b69ca-ee56-4efb-bcd6-60ed4b160d00 4133756 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a07f0 0xc0062a07f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.111: INFO: Pod "webserver-deployment-595b5b9587-vjqrk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vjqrk webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-vjqrk 7f4c04e8-0944-4e54-abe6-20e359cfcb4c 4133623 0 2020-01-25 00:50:26 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a0907 0xc0062a0908}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-25 00:50:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 00:50:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://51786a62f0209a94129484eea55e9492b6fdb2b3053c0ef67522a5d41fa73f84,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
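Editor's note: the framework marks a pod of the old ReplicaSet (pod-template-hash 595b5b9587, image httpd:2.4.38-alpine) "available" once its Ready condition has been True for the deployment's minReadySeconds. A minimal Go sketch of that check, assuming the k8s.io/api types seen in the dumps (the real helper lives in the Kubernetes pod utilities):

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable reports whether pod has been Ready for at least
// minReadySeconds as of now, mirroring the "is available" /
// "is not available" verdicts logged here. Sketch only.
func isPodAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
	var ready *v1.PodCondition
	for i := range pod.Status.Conditions {
		if pod.Status.Conditions[i].Type == v1.PodReady {
			ready = &pod.Status.Conditions[i]
			break
		}
	}
	if ready == nil || ready.Status != v1.ConditionTrue {
		return false // e.g. ContainersNotReady, as in the dumps below
	}
	if minReadySeconds == 0 {
		return true
	}
	minReady := time.Duration(minReadySeconds) * time.Second
	return !ready.LastTransitionTime.IsZero() &&
		ready.LastTransitionTime.Add(minReady).Before(now.Time)
}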
Jan 25 00:51:10.111: INFO: Pod "webserver-deployment-595b5b9587-w5lp5" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w5lp5 webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-w5lp5 d9f14e5f-ab78-4c39-a58e-82435bfe4ea6 4133759 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a0a80 0xc0062a0a81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.111: INFO: Pod "webserver-deployment-595b5b9587-w72rk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w72rk webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-w72rk 771f6c51-25f3-4541-85d2-62d5f8ca5f11 4133757 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a0b87 0xc0062a0b88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
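Editor's note: every dump in this test carries the same two tolerations. The test does not set them; they are injected by the DefaultTolerationSeconds admission plugin, which grants each pod a 300-second grace period on not-ready and unreachable nodes before eviction. Written out as Go API objects (the pointer helper from k8s.io/utils is an assumption of this sketch):

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/utils/pointer"
)

// defaultTolerations reproduces the pair stamped on every pod above.
var defaultTolerations = []v1.Toleration{
	{Key: "node.kubernetes.io/not-ready", Operator: v1.TolerationOpExists,
		Effect: v1.TaintEffectNoExecute, TolerationSeconds: pointer.Int64(300)},
	{Key: "node.kubernetes.io/unreachable", Operator: v1.TolerationOpExists,
		Effect: v1.TaintEffectNoExecute, TolerationSeconds: pointer.Int64(300)},
}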
Jan 25 00:51:10.111: INFO: Pod "webserver-deployment-595b5b9587-xgfx4" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xgfx4 webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-xgfx4 d8a3ca01-0126-428b-bc02-4bd2dd1d1d9a 4133640 0 2020-01-25 00:50:26 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a0c97 0xc0062a0c98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-01-25 00:50:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 00:50:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://558ebcd72436327c250a7d7428051c3dd4951b9ff6f91123b0f544de1c2508eb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.111: INFO: Pod "webserver-deployment-595b5b9587-zxpn6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zxpn6 webserver-deployment-595b5b9587- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-595b5b9587-zxpn6 77ffb3cf-8668-41f9-8bda-327006b6103c 4133740 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1f115a2a-e638-4800-a6da-19cd7b13da7c 0xc0062a0e00 0xc0062a0e01}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
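Editor's note: from here the log switches from the old ReplicaSet's pods to those of the new one (pod-template-hash c7997dcc8). That hash label is what ties each pod to a single Deployment generation, so either generation can be listed on its own. A sketch with client-go (a recent client-go whose List takes a context is assumed; the kubeconfig path is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// The hash below selects the new generation seen in this log.
	pods, err := clientset.CoreV1().Pods("deployment-1273").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "pod-template-hash=c7997dcc8"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}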
Jan 25 00:51:10.112: INFO: Pod "webserver-deployment-c7997dcc8-2k9w7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2k9w7 webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-2k9w7 f2756bd8-ae01-475c-9791-c22ffa0360d4 4133766 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a0f17 0xc0062a0f18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 00:51:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
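Editor's note: none of the c7997dcc8 pods can become available, because the rollout deliberately updated the template to the nonexistent image webserver:404. The kubelet therefore leaves the container in a Waiting state (ContainerCreating in these dumps; an image pull error would typically follow), and the Ready condition stays False with reason ContainersNotReady. A sketch that surfaces those waiting reasons from the status fields shown above:

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// waitingReasons lists, per container, why the container is not
// running, using the same Waiting state populated in the dumps.
func waitingReasons(pod *v1.Pod) []string {
	var reasons []string
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			reasons = append(reasons,
				fmt.Sprintf("%s: %s", cs.Name, cs.State.Waiting.Reason))
		}
	}
	return reasons
}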
Jan 25 00:51:10.112: INFO: Pod "webserver-deployment-c7997dcc8-2smld" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2smld webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-2smld 4a15769a-1588-4a1f-9dd5-bdb0dec3935b 4133673 0 2020-01-25 00:50:56 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a1087 0xc0062a1088}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 00:50:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.112: INFO: Pod "webserver-deployment-c7997dcc8-4p5ms" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4p5ms webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-4p5ms 04519558-6fd2-4039-aeb7-207416fe764e 4133731 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a11f7 0xc0062a11f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
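Editor's note: two shapes of "not available" appear in these dumps. Pods the kubelet has picked up carry Initialized/Ready/ContainersReady conditions plus a ContainerStatuses entry; the freshest pods, like the one above, show only PodScheduled=True with empty HostIP, PodIP, and ContainerStatuses, meaning the scheduler has placed them but the kubelet has not yet reported. A small sketch of that distinction:

import v1 "k8s.io/api/core/v1"

// startedByKubelet reports whether the kubelet has begun syncing the
// pod: once it has, an Initialized condition appears alongside
// PodScheduled (compare the two kinds of Pending dumps above).
func startedByKubelet(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodInitialized {
			return true
		}
	}
	return false
}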
Jan 25 00:51:10.113: INFO: Pod "webserver-deployment-c7997dcc8-5w2wt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5w2wt webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-5w2wt 6def15b4-7529-43ca-be5f-b864a5fa5b8f 4133762 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a1327 0xc0062a1328}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.113: INFO: Pod "webserver-deployment-c7997dcc8-9n4rh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9n4rh webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-9n4rh 0cf887a0-ca66-4fc1-9ac0-b5347de81f86 4133676 0 2020-01-25 00:50:56 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a1447 0xc0062a1448}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 00:50:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.113: INFO: Pod "webserver-deployment-c7997dcc8-fhvr5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fhvr5 webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-fhvr5 2b69e9fc-12e9-48f7-ab7d-377c9f6eef18 4133760 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a15c7 0xc0062a15c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.114: INFO: Pod "webserver-deployment-c7997dcc8-jvl76" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jvl76 webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-jvl76 99c5e259-07ae-4cdb-bb2d-1627a383eadb 4133755 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a16e7 0xc0062a16e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.114: INFO: Pod "webserver-deployment-c7997dcc8-l6tlg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l6tlg webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-l6tlg ecd3dde2-7cbc-4db6-8a02-6047d7a670b2 4133697 0 2020-01-25 00:50:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a1817 0xc0062a1818}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 00:50:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.114: INFO: Pod "webserver-deployment-c7997dcc8-m92hb" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-m92hb webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-m92hb f24db327-cc2e-49eb-822c-642104442dc9 4133763 0 2020-01-25 00:51:02 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a19a7 0xc0062a19a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.115: INFO: Pod "webserver-deployment-c7997dcc8-qh9xh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qh9xh webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-qh9xh b9092040-e075-46c6-a73b-8c73c80092e0 4133749 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a1ad7 0xc0062a1ad8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.115: INFO: Pod "webserver-deployment-c7997dcc8-v6p8h" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v6p8h webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-v6p8h 0243036c-40ba-4555-9ee1-05da95e95c6a 4133743 0 2020-01-25 00:51:01 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a1c07 0xc0062a1c08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:51:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
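Editor's note: the verdicts above are one iteration of a poll; the framework keeps re-listing the deployment's pods until they settle or a timeout expires. A sketch of such a loop with the apimachinery wait helpers, reusing the isPodAvailable sketch from earlier (the 2s/5m interval and timeout are illustrative, not the suite's exact values):

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForAvailable polls every 2s, for up to 5m, until every pod
// matching selector in ns passes the availability check.
func waitForAvailable(cs kubernetes.Interface, ns, selector string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		for i := range pods.Items {
			if !isPodAvailable(&pods.Items[i], 0, metav1.Now()) {
				return false, nil
			}
		}
		return true, nil
	})
}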
Jan 25 00:51:10.116: INFO: Pod "webserver-deployment-c7997dcc8-wgzxr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wgzxr webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-wgzxr a35a832c-5bba-435f-ae4f-5dcc6ba3ec5e 4133698 0 2020-01-25 00:50:57 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a1d27 0xc0062a1d28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-25 00:50:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 25 00:51:10.116: INFO: Pod "webserver-deployment-c7997dcc8-z667m" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z667m webserver-deployment-c7997dcc8- deployment-1273 /api/v1/namespaces/deployment-1273/pods/webserver-deployment-c7997dcc8-z667m 95afb46d-d225-4fd1-826d-64ceb73455ec 4133670 0 2020-01-25 00:50:56 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 159ea485-93b2-4099-8ae5-561136c7ec8b 0xc0062a1e97 0xc0062a1e98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4s2lx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4s2lx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4s2lx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 00:50:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-25 00:50:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:51:10.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1273" for this suite.

• [SLOW TEST:45.099 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":188,"skipped":3052,"failed":0}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:51:11.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9649
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating statefulset ss in namespace statefulset-9649
Jan 25 00:51:18.872: INFO: Found 0 stateful pods, waiting for 1
Jan 25 00:51:29.073: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 00:51:38.882: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 00:51:50.342: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 00:51:59.227: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 00:52:08.910: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 00:52:18.944: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 00:52:28.880: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 00:52:38.881: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 25 00:52:38.912: INFO: Deleting all statefulset in ns statefulset-9649
Jan 25 00:52:38.915: INFO: Scaling statefulset ss to 0
Jan 25 00:52:59.124: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 00:52:59.133: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:52:59.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9649" for this suite.

• [SLOW TEST:107.586 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":189,"skipped":3052,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:52:59.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override all
Jan 25 00:52:59.292: INFO: Waiting up to 5m0s for pod "client-containers-0d559f2a-dd41-4bd3-8ded-861ff25c3ade" in namespace "containers-414" to be "success or failure"
Jan 25 00:52:59.296: INFO: Pod "client-containers-0d559f2a-dd41-4bd3-8ded-861ff25c3ade": Phase="Pending", Reason="", readiness=false. Elapsed: 3.617758ms
Jan 25 00:53:01.304: INFO: Pod "client-containers-0d559f2a-dd41-4bd3-8ded-861ff25c3ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011509024s
Jan 25 00:53:03.313: INFO: Pod "client-containers-0d559f2a-dd41-4bd3-8ded-861ff25c3ade": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020961693s
Jan 25 00:53:05.340: INFO: Pod "client-containers-0d559f2a-dd41-4bd3-8ded-861ff25c3ade": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048235242s
Jan 25 00:53:07.359: INFO: Pod "client-containers-0d559f2a-dd41-4bd3-8ded-861ff25c3ade": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066547867s
STEP: Saw pod success
Jan 25 00:53:07.359: INFO: Pod "client-containers-0d559f2a-dd41-4bd3-8ded-861ff25c3ade" satisfied condition "success or failure"
Jan 25 00:53:07.361: INFO: Trying to get logs from node jerma-node pod client-containers-0d559f2a-dd41-4bd3-8ded-861ff25c3ade container test-container: 
STEP: delete the pod
Jan 25 00:53:07.414: INFO: Waiting for pod client-containers-0d559f2a-dd41-4bd3-8ded-861ff25c3ade to disappear
Jan 25 00:53:07.419: INFO: Pod client-containers-0d559f2a-dd41-4bd3-8ded-861ff25c3ade no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:53:07.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-414" for this suite.

• [SLOW TEST:8.315 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3072,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:53:07.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
STEP: creating the pod
Jan 25 00:53:07.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2549'
Jan 25 00:53:08.342: INFO: stderr: ""
Jan 25 00:53:08.342: INFO: stdout: "pod/pause created\n"
Jan 25 00:53:08.342: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 25 00:53:08.342: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2549" to be "running and ready"
Jan 25 00:53:08.348: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.882741ms
Jan 25 00:53:10.356: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013094179s
Jan 25 00:53:12.364: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02127165s
Jan 25 00:53:14.369: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026991947s
Jan 25 00:53:16.395: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.052599432s
Jan 25 00:53:16.395: INFO: Pod "pause" satisfied condition "running and ready"
Jan 25 00:53:16.395: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 25 00:53:16.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2549'
Jan 25 00:53:16.679: INFO: stderr: ""
Jan 25 00:53:16.679: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 25 00:53:16.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2549'
Jan 25 00:53:16.773: INFO: stderr: ""
Jan 25 00:53:16.773: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 25 00:53:16.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2549'
Jan 25 00:53:16.878: INFO: stderr: ""
Jan 25 00:53:16.878: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 25 00:53:16.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2549'
Jan 25 00:53:17.111: INFO: stderr: ""
Jan 25 00:53:17.111: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1390
STEP: using delete to clean up resources
Jan 25 00:53:17.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2549'
Jan 25 00:53:17.313: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 00:53:17.313: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 25 00:53:17.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2549'
Jan 25 00:53:17.475: INFO: stderr: "No resources found in kubectl-2549 namespace.\n"
Jan 25 00:53:17.475: INFO: stdout: ""
Jan 25 00:53:17.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2549 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 25 00:53:17.610: INFO: stderr: ""
Jan 25 00:53:17.610: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:53:17.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2549" for this suite.

• [SLOW TEST:10.118 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1380
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":278,"completed":191,"skipped":3079,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:53:17.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:53:17.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jan 25 00:53:18.442: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T00:53:18Z generation:1 name:name1 resourceVersion:4134391 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6e6f38bd-e907-48b5-a564-f4c0fcf4e54c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 25 00:53:28.451: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T00:53:28Z generation:1 name:name2 resourceVersion:4134424 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:572e1a28-3e4f-42fc-a176-a500c32b6b77] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 25 00:53:38.461: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T00:53:18Z generation:2 name:name1 resourceVersion:4134445 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6e6f38bd-e907-48b5-a564-f4c0fcf4e54c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 25 00:53:48.472: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T00:53:28Z generation:2 name:name2 resourceVersion:4134468 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:572e1a28-3e4f-42fc-a176-a500c32b6b77] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 25 00:53:58.485: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T00:53:18Z generation:2 name:name1 resourceVersion:4134492 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:6e6f38bd-e907-48b5-a564-f4c0fcf4e54c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 25 00:54:08.499: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-25T00:53:28Z generation:2 name:name2 resourceVersion:4134516 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:572e1a28-3e4f-42fc-a176-a500c32b6b77] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:54:19.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-9499" for this suite.

• [SLOW TEST:61.439 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":192,"skipped":3110,"failed":0}
SSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:54:19.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 25 00:54:25.813: INFO: Successfully updated pod "pod-update-activedeadlineseconds-66e875d9-f0ad-4526-8835-e3bc9a777690"
Jan 25 00:54:25.813: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-66e875d9-f0ad-4526-8835-e3bc9a777690" in namespace "pods-2412" to be "terminated due to deadline exceeded"
Jan 25 00:54:25.829: INFO: Pod "pod-update-activedeadlineseconds-66e875d9-f0ad-4526-8835-e3bc9a777690": Phase="Running", Reason="", readiness=true. Elapsed: 16.10409ms
Jan 25 00:54:27.846: INFO: Pod "pod-update-activedeadlineseconds-66e875d9-f0ad-4526-8835-e3bc9a777690": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.032371177s
Jan 25 00:54:27.846: INFO: Pod "pod-update-activedeadlineseconds-66e875d9-f0ad-4526-8835-e3bc9a777690" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:54:27.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2412" for this suite.

• [SLOW TEST:8.810 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3116,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:54:27.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-7856fef8-e1da-49d8-97a4-a3d93d8ed36f
STEP: Creating configMap with name cm-test-opt-upd-9d2265a7-5962-438a-a8f2-e27a32f4ff15
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-7856fef8-e1da-49d8-97a4-a3d93d8ed36f
STEP: Updating configmap cm-test-opt-upd-9d2265a7-5962-438a-a8f2-e27a32f4ff15
STEP: Creating configMap with name cm-test-opt-create-bf983eeb-a946-433c-b082-f2c6da0ac76e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:54:42.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1477" for this suite.

• [SLOW TEST:14.469 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3126,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:54:42.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-f02f987d-f01a-4fd9-bfa3-64861f5b60b2 in namespace container-probe-3631
Jan 25 00:54:52.449: INFO: Started pod liveness-f02f987d-f01a-4fd9-bfa3-64861f5b60b2 in namespace container-probe-3631
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 00:54:52.474: INFO: Initial restart count of pod liveness-f02f987d-f01a-4fd9-bfa3-64861f5b60b2 is 0
Jan 25 00:55:16.605: INFO: Restart count of pod container-probe-3631/liveness-f02f987d-f01a-4fd9-bfa3-64861f5b60b2 is now 1 (24.131134915s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:55:16.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3631" for this suite.

• [SLOW TEST:34.344 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3134,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:55:16.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 25 00:55:16.874: INFO: Number of nodes with available pods: 0
Jan 25 00:55:16.874: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:17.899: INFO: Number of nodes with available pods: 0
Jan 25 00:55:17.900: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:19.343: INFO: Number of nodes with available pods: 0
Jan 25 00:55:19.344: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:19.886: INFO: Number of nodes with available pods: 0
Jan 25 00:55:19.886: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:20.944: INFO: Number of nodes with available pods: 0
Jan 25 00:55:20.944: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:21.971: INFO: Number of nodes with available pods: 0
Jan 25 00:55:21.971: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:22.895: INFO: Number of nodes with available pods: 0
Jan 25 00:55:22.895: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:24.724: INFO: Number of nodes with available pods: 0
Jan 25 00:55:24.724: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:25.652: INFO: Number of nodes with available pods: 0
Jan 25 00:55:25.652: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:25.892: INFO: Number of nodes with available pods: 0
Jan 25 00:55:25.892: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:26.919: INFO: Number of nodes with available pods: 1
Jan 25 00:55:26.919: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:27.883: INFO: Number of nodes with available pods: 1
Jan 25 00:55:27.883: INFO: Node jerma-node is running more than one daemon pod
Jan 25 00:55:28.888: INFO: Number of nodes with available pods: 2
Jan 25 00:55:28.888: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 25 00:55:28.978: INFO: Number of nodes with available pods: 2
Jan 25 00:55:28.978: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-896, will wait for the garbage collector to delete the pods
Jan 25 00:55:30.122: INFO: Deleting DaemonSet.extensions daemon-set took: 8.235244ms
Jan 25 00:55:30.523: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.585935ms
Jan 25 00:55:36.728: INFO: Number of nodes with available pods: 0
Jan 25 00:55:36.728: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 00:55:36.731: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-896/daemonsets","resourceVersion":"4134886"},"items":null}

Jan 25 00:55:36.736: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-896/pods","resourceVersion":"4134886"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:55:36.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-896" for this suite.

• [SLOW TEST:20.077 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":196,"skipped":3179,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:55:36.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 25 00:55:36.933: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 00:55:39.869: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:55:49.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7774" for this suite.

• [SLOW TEST:12.846 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":197,"skipped":3195,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:55:49.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1797
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Jan 25 00:55:49.740: INFO: Found 0 stateful pods, waiting for 3
Jan 25 00:55:59.749: INFO: Found 2 stateful pods, waiting for 3
Jan 25 00:56:09.749: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:56:09.749: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:56:09.749: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 00:56:19.749: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:56:19.749: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:56:19.749: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 25 00:56:19.788: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 25 00:56:29.893: INFO: Updating stateful set ss2
Jan 25 00:56:29.940: INFO: Waiting for Pod statefulset-1797/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 25 00:56:40.397: INFO: Found 2 stateful pods, waiting for 3
Jan 25 00:56:50.407: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:56:50.407: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:56:50.407: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 25 00:57:00.408: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:57:00.408: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 25 00:57:00.408: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 25 00:57:00.441: INFO: Updating stateful set ss2
Jan 25 00:57:00.453: INFO: Waiting for Pod statefulset-1797/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 00:57:10.888: INFO: Updating stateful set ss2
Jan 25 00:57:11.089: INFO: Waiting for StatefulSet statefulset-1797/ss2 to complete update
Jan 25 00:57:11.089: INFO: Waiting for Pod statefulset-1797/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 25 00:57:21.098: INFO: Waiting for StatefulSet statefulset-1797/ss2 to complete update
Jan 25 00:57:21.098: INFO: Waiting for Pod statefulset-1797/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 25 00:57:31.099: INFO: Deleting all statefulset in ns statefulset-1797
Jan 25 00:57:31.103: INFO: Scaling statefulset ss2 to 0
Jan 25 00:58:01.132: INFO: Waiting for statefulset status.replicas updated to 0
Jan 25 00:58:01.137: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:58:01.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1797" for this suite.

• [SLOW TEST:131.575 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":198,"skipped":3230,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:58:01.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1633
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 00:58:01.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7976'
Jan 25 00:58:03.412: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 00:58:03.412: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan 25 00:58:03.432: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-zsrjr]
Jan 25 00:58:03.432: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-zsrjr" in namespace "kubectl-7976" to be "running and ready"
Jan 25 00:58:03.460: INFO: Pod "e2e-test-httpd-rc-zsrjr": Phase="Pending", Reason="", readiness=false. Elapsed: 27.599665ms
Jan 25 00:58:05.468: INFO: Pod "e2e-test-httpd-rc-zsrjr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035890768s
Jan 25 00:58:07.476: INFO: Pod "e2e-test-httpd-rc-zsrjr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043906262s
Jan 25 00:58:09.484: INFO: Pod "e2e-test-httpd-rc-zsrjr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051535863s
Jan 25 00:58:11.490: INFO: Pod "e2e-test-httpd-rc-zsrjr": Phase="Running", Reason="", readiness=true. Elapsed: 8.057299948s
Jan 25 00:58:11.490: INFO: Pod "e2e-test-httpd-rc-zsrjr" satisfied condition "running and ready"
Jan 25 00:58:11.490: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-zsrjr]
Jan 25 00:58:11.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7976'
Jan 25 00:58:11.750: INFO: stderr: ""
Jan 25 00:58:11.750: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Sat Jan 25 00:58:09.703077 2020] [mpm_event:notice] [pid 1:tid 140316079106920] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Jan 25 00:58:09.703166 2020] [core:notice] [pid 1:tid 140316079106920] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1638
Jan 25 00:58:11.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7976'
Jan 25 00:58:11.922: INFO: stderr: ""
Jan 25 00:58:11.922: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:58:11.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7976" for this suite.

• [SLOW TEST:10.755 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":199,"skipped":3261,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:58:11.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 00:58:12.520: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 00:58:14.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:58:16.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:58:18.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 00:58:20.557: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715510692, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 00:58:24.309: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:58:24.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5402-crds.webhook.example.com via the AdmissionRegistration API
Jan 25 00:58:24.941: INFO: Waiting for webhook configuration to be ready...
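Registering the webhook this way amounts to creating a MutatingWebhookConfiguration whose rules match the custom resource's group and versions. A minimal sketch follows; the webhook path and configuration name are assumptions, and a real configuration also needs the caBundle for the webhook's serving certificate (the service name and namespace come from this run):

  kubectl apply -f - <<EOF
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: mutate-custom-resource.example.com        # hypothetical name
  webhooks:
  - name: mutate-custom-resource.example.com
    clientConfig:
      service:
        namespace: webhook-7467
        name: e2e-test-webhook
        path: /mutating-custom-resource              # path is an assumption
      # caBundle for the serving cert goes here
    rules:
    - apiGroups: ["webhook.example.com"]
      apiVersions: ["v1", "v2"]
      operations: ["CREATE", "UPDATE"]
      resources: ["e2e-test-webhook-5402-crds"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
  EOF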
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
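The storage-version flip in the two patch steps above is an update to the storage flags in the CRD's spec.versions list; a sketch with kubectl, assuming v1 sits at index 0 and v2 at index 1:

  kubectl patch crd e2e-test-webhook-5402-crds.webhook.example.com --type=json -p '[
    {"op": "replace", "path": "/spec/versions/0/storage", "value": false},
    {"op": "replace", "path": "/spec/versions/1/storage", "value": true}
  ]'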
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:58:25.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7467" for this suite.
STEP: Destroying namespace "webhook-7467-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.982 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":200,"skipped":3267,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:58:25.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:58:26.070: INFO: Create a RollingUpdate DaemonSet
Jan 25 00:58:26.073: INFO: Check that daemon pods launch on every node of the cluster
Jan 25 00:58:26.111: INFO: Number of nodes with available pods: 0
Jan 25 00:58:26.111: INFO: Node jerma-node is not yet running exactly one daemon pod
Jan 25 00:58:27.121: INFO: Number of nodes with available pods: 0
Jan 25 00:58:27.121: INFO: Node jerma-node is not yet running exactly one daemon pod
Jan 25 00:58:28.237: INFO: Number of nodes with available pods: 0
Jan 25 00:58:28.238: INFO: Node jerma-node is not yet running exactly one daemon pod
Jan 25 00:58:29.126: INFO: Number of nodes with available pods: 0
Jan 25 00:58:29.126: INFO: Node jerma-node is not yet running exactly one daemon pod
Jan 25 00:58:30.122: INFO: Number of nodes with available pods: 0
Jan 25 00:58:30.122: INFO: Node jerma-node is not yet running exactly one daemon pod
Jan 25 00:58:32.492: INFO: Number of nodes with available pods: 0
Jan 25 00:58:32.492: INFO: Node jerma-node is not yet running exactly one daemon pod
Jan 25 00:58:33.945: INFO: Number of nodes with available pods: 0
Jan 25 00:58:33.945: INFO: Node jerma-node is not yet running exactly one daemon pod
Jan 25 00:58:34.159: INFO: Number of nodes with available pods: 0
Jan 25 00:58:34.159: INFO: Node jerma-node is not yet running exactly one daemon pod
Jan 25 00:58:35.126: INFO: Number of nodes with available pods: 0
Jan 25 00:58:35.126: INFO: Node jerma-node is not yet running exactly one daemon pod
Jan 25 00:58:36.122: INFO: Number of nodes with available pods: 1
Jan 25 00:58:36.122: INFO: Node jerma-node is not yet running exactly one daemon pod
Jan 25 00:58:37.120: INFO: Number of nodes with available pods: 2
Jan 25 00:58:37.120: INFO: Number of running nodes: 2, number of available pods: 2
Jan 25 00:58:37.120: INFO: Update the DaemonSet to trigger a rollout
Jan 25 00:58:37.125: INFO: Updating DaemonSet daemon-set
Jan 25 00:58:53.290: INFO: Roll back the DaemonSet before rollout is complete
Jan 25 00:58:53.301: INFO: Updating DaemonSet daemon-set
Jan 25 00:58:53.301: INFO: Make sure DaemonSet rollback is complete
Jan 25 00:58:53.311: INFO: Wrong image for pod: daemon-set-5tb82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 00:58:53.311: INFO: Pod daemon-set-5tb82 is not available
Jan 25 00:58:54.348: INFO: Wrong image for pod: daemon-set-5tb82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 00:58:54.348: INFO: Pod daemon-set-5tb82 is not available
Jan 25 00:58:55.346: INFO: Wrong image for pod: daemon-set-5tb82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 00:58:55.347: INFO: Pod daemon-set-5tb82 is not available
Jan 25 00:58:56.352: INFO: Wrong image for pod: daemon-set-5tb82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 00:58:56.352: INFO: Pod daemon-set-5tb82 is not available
Jan 25 00:58:57.349: INFO: Wrong image for pod: daemon-set-5tb82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 00:58:57.349: INFO: Pod daemon-set-5tb82 is not available
Jan 25 00:58:58.369: INFO: Wrong image for pod: daemon-set-5tb82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 00:58:58.369: INFO: Pod daemon-set-5tb82 is not available
Jan 25 00:58:59.348: INFO: Wrong image for pod: daemon-set-5tb82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 00:58:59.348: INFO: Pod daemon-set-5tb82 is not available
Jan 25 00:59:00.346: INFO: Wrong image for pod: daemon-set-5tb82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 00:59:00.346: INFO: Pod daemon-set-5tb82 is not available
Jan 25 00:59:01.347: INFO: Wrong image for pod: daemon-set-5tb82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 00:59:01.347: INFO: Pod daemon-set-5tb82 is not available
Jan 25 00:59:02.345: INFO: Wrong image for pod: daemon-set-5tb82. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 25 00:59:02.345: INFO: Pod daemon-set-5tb82 is not available
Jan 25 00:59:03.366: INFO: Pod daemon-set-snclr is not available
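The trigger-and-rollback sequence above can be reproduced by hand; a sketch, assuming the DaemonSet's container is named app:

  # trigger a rolling update with an image that can never pull
  kubectl -n daemonsets-9883 set image daemonset/daemon-set app=foo:non-existent
  # roll back to the previous template before the rollout completes
  kubectl -n daemonsets-9883 rollout undo daemonset/daemon-set
  kubectl -n daemonsets-9883 rollout status daemonset/daemon-set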
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9883, will wait for the garbage collector to delete the pods
Jan 25 00:59:03.458: INFO: Deleting DaemonSet.extensions daemon-set took: 23.782412ms
Jan 25 00:59:04.758: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.300562909s
Jan 25 00:59:11.065: INFO: Number of nodes with available pods: 0
Jan 25 00:59:11.065: INFO: Number of running nodes: 0, number of available pods: 0
Jan 25 00:59:11.071: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9883/daemonsets","resourceVersion":"4135873"},"items":null}

Jan 25 00:59:11.076: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9883/pods","resourceVersion":"4135873"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:59:11.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9883" for this suite.

• [SLOW TEST:45.179 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":201,"skipped":3269,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:59:11.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Jan 25 00:59:11.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4241'
Jan 25 00:59:11.680: INFO: stderr: ""
Jan 25 00:59:11.680: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 25 00:59:12.687: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 25 00:59:12.687: INFO: Found 0 / 1
Jan 25 00:59:13.692: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 25 00:59:13.692: INFO: Found 0 / 1
Jan 25 00:59:14.690: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 25 00:59:14.690: INFO: Found 0 / 1
Jan 25 00:59:15.691: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 25 00:59:15.691: INFO: Found 0 / 1
Jan 25 00:59:16.767: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 25 00:59:16.767: INFO: Found 0 / 1
Jan 25 00:59:17.687: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 25 00:59:17.687: INFO: Found 1 / 1
Jan 25 00:59:17.687: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 25 00:59:17.692: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 25 00:59:17.692: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Jan 25 00:59:17.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-6jdv7 --namespace=kubectl-4241 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 25 00:59:17.887: INFO: stderr: ""
Jan 25 00:59:17.888: INFO: stdout: "pod/agnhost-master-6jdv7 patched\n"
STEP: checking annotations
Jan 25 00:59:17.904: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 25 00:59:17.904: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
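The patch-and-verify pair above reduces to two commands (pod name taken from this run):

  kubectl --namespace=kubectl-4241 patch pod agnhost-master-6jdv7 -p '{"metadata":{"annotations":{"x":"y"}}}'
  kubectl --namespace=kubectl-4241 get pod agnhost-master-6jdv7 -o jsonpath='{.metadata.annotations.x}'   # prints: y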
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:59:17.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4241" for this suite.

• [SLOW TEST:6.839 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1540
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":202,"skipped":3278,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:59:17.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 00:59:18.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
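Under the hood this test drives the pod's exec subresource over a websocket upgrade rather than SPDY; the endpoint has the shape below (pod name and command are illustrative), and the client has to negotiate a channel.k8s.io subprotocol to multiplex stdout/stderr frames:

  wss://<apiserver>/api/v1/namespaces/pods-7142/pods/<pod-name>/exec?command=echo&command=hello&stdout=true&stderr=true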
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:59:26.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7142" for this suite.

• [SLOW TEST:8.417 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3282,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:59:26.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 25 00:59:26.500: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7167 /api/v1/namespaces/watch-7167/configmaps/e2e-watch-test-watch-closed 749a8997-6058-40cc-8075-dfcdf07afb11 4135979 0 2020-01-25 00:59:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 25 00:59:26.501: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7167 /api/v1/namespaces/watch-7167/configmaps/e2e-watch-test-watch-closed 749a8997-6058-40cc-8075-dfcdf07afb11 4135980 0 2020-01-25 00:59:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 25 00:59:26.521: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7167 /api/v1/namespaces/watch-7167/configmaps/e2e-watch-test-watch-closed 749a8997-6058-40cc-8075-dfcdf07afb11 4135981 0 2020-01-25 00:59:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 25 00:59:26.521: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7167 /api/v1/namespaces/watch-7167/configmaps/e2e-watch-test-watch-closed 749a8997-6058-40cc-8075-dfcdf07afb11 4135982 0 2020-01-25 00:59:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
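Resuming a watch from the last observed point is just a list request with watch=true and the saved resourceVersion; a sketch via kubectl proxy, using the resourceVersion from the first MODIFIED event above:

  kubectl proxy --port=8001 &
  curl "http://127.0.0.1:8001/api/v1/namespaces/watch-7167/configmaps?watch=true&resourceVersion=4135980"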
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:59:26.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7167" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":204,"skipped":3355,"failed":0}

------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:59:26.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-22086403-93ab-4712-81bc-11cd86dd873f
STEP: Creating a pod to test consume secrets
Jan 25 00:59:26.659: INFO: Waiting up to 5m0s for pod "pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879" in namespace "secrets-5432" to be "success or failure"
Jan 25 00:59:26.685: INFO: Pod "pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879": Phase="Pending", Reason="", readiness=false. Elapsed: 26.356698ms
Jan 25 00:59:28.694: INFO: Pod "pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034893885s
Jan 25 00:59:30.701: INFO: Pod "pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041817089s
Jan 25 00:59:32.705: INFO: Pod "pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046571508s
Jan 25 00:59:34.709: INFO: Pod "pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879": Phase="Running", Reason="", readiness=true. Elapsed: 8.050420998s
Jan 25 00:59:36.725: INFO: Pod "pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066010666s
STEP: Saw pod success
Jan 25 00:59:36.725: INFO: Pod "pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879" satisfied condition "success or failure"
Jan 25 00:59:36.729: INFO: Trying to get logs from node jerma-node pod pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879 container secret-volume-test: 
STEP: delete the pod
Jan 25 00:59:36.771: INFO: Waiting for pod pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879 to disappear
Jan 25 00:59:36.827: INFO: Pod pod-secrets-cb927537-9d8e-41a7-b44f-6fb3d21f2879 no longer exists
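The volume spec being exercised maps a secret key to a custom path with an explicit per-file mode; a minimal sketch (pod name, image, command, key, path, and mode are assumptions; only the secret name comes from this run):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secret-item-mode
    namespace: secrets-5432
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map-22086403-93ab-4712-81bc-11cd86dd873f
        items:
        - key: data-1
          path: new-path-data-1
          mode: 0400
  EOF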
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:59:36.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5432" for this suite.

• [SLOW TEST:10.310 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3355,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:59:36.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 25 00:59:36.914: INFO: Waiting up to 5m0s for pod "pod-9b7c160c-a32b-4334-bd8e-5831a8ef063d" in namespace "emptydir-5572" to be "success or failure"
Jan 25 00:59:36.957: INFO: Pod "pod-9b7c160c-a32b-4334-bd8e-5831a8ef063d": Phase="Pending", Reason="", readiness=false. Elapsed: 43.321018ms
Jan 25 00:59:38.963: INFO: Pod "pod-9b7c160c-a32b-4334-bd8e-5831a8ef063d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049247073s
Jan 25 00:59:40.968: INFO: Pod "pod-9b7c160c-a32b-4334-bd8e-5831a8ef063d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053914458s
Jan 25 00:59:42.979: INFO: Pod "pod-9b7c160c-a32b-4334-bd8e-5831a8ef063d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065278394s
Jan 25 00:59:44.989: INFO: Pod "pod-9b7c160c-a32b-4334-bd8e-5831a8ef063d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074839814s
STEP: Saw pod success
Jan 25 00:59:44.989: INFO: Pod "pod-9b7c160c-a32b-4334-bd8e-5831a8ef063d" satisfied condition "success or failure"
Jan 25 00:59:44.992: INFO: Trying to get logs from node jerma-node pod pod-9b7c160c-a32b-4334-bd8e-5831a8ef063d container test-container: 
STEP: delete the pod
Jan 25 00:59:45.340: INFO: Waiting for pod pod-9b7c160c-a32b-4334-bd8e-5831a8ef063d to disappear
Jan 25 00:59:45.347: INFO: Pod pod-9b7c160c-a32b-4334-bd8e-5831a8ef063d no longer exists
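The (root,0777,tmpfs) case boils down to a memory-backed emptyDir checked from a root container; a minimal sketch (pod name, image, and command are assumptions):

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-tmpfs
    namespace: emptydir-5572
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "mount | grep /test-volume && stat -c '%a' /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory    # Memory = tmpfs
  EOF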
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 00:59:45.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5572" for this suite.

• [SLOW TEST:8.521 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3374,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 00:59:45.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 25 01:00:01.600: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 01:00:01.631: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 01:00:03.631: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 01:00:03.644: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 01:00:05.631: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 01:00:05.638: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 01:00:07.631: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 01:00:07.639: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 01:00:09.631: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 01:00:09.637: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 01:00:11.631: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 01:00:11.635: INFO: Pod pod-with-poststart-http-hook still exists
Jan 25 01:00:13.631: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 25 01:00:13.639: INFO: Pod pod-with-poststart-http-hook no longer exists
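The hook under test is a postStart httpGet on the container's lifecycle; the relevant spec fragment looks like the sketch below. Image, path, and port are assumptions, and <handler-pod-IP> stands in for the address of the separate handler pod created in the earlier BeforeEach step:

  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1          # image is an assumption
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart      # path is an assumption
          port: 8080
          host: <handler-pod-IP>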
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:00:13.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-974" for this suite.

• [SLOW TEST:28.294 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3402,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:00:13.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 01:00:13.792: INFO: Waiting up to 5m0s for pod "downwardapi-volume-295ccad8-9c4c-4705-a751-cfa8c61c421b" in namespace "downward-api-9496" to be "success or failure"
Jan 25 01:00:13.808: INFO: Pod "downwardapi-volume-295ccad8-9c4c-4705-a751-cfa8c61c421b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.426527ms
Jan 25 01:00:15.816: INFO: Pod "downwardapi-volume-295ccad8-9c4c-4705-a751-cfa8c61c421b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023802682s
Jan 25 01:00:17.837: INFO: Pod "downwardapi-volume-295ccad8-9c4c-4705-a751-cfa8c61c421b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045363321s
Jan 25 01:00:19.845: INFO: Pod "downwardapi-volume-295ccad8-9c4c-4705-a751-cfa8c61c421b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053018674s
Jan 25 01:00:21.852: INFO: Pod "downwardapi-volume-295ccad8-9c4c-4705-a751-cfa8c61c421b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060589789s
STEP: Saw pod success
Jan 25 01:00:21.852: INFO: Pod "downwardapi-volume-295ccad8-9c4c-4705-a751-cfa8c61c421b" satisfied condition "success or failure"
Jan 25 01:00:21.855: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-295ccad8-9c4c-4705-a751-cfa8c61c421b container client-container: 
STEP: delete the pod
Jan 25 01:00:21.952: INFO: Waiting for pod downwardapi-volume-295ccad8-9c4c-4705-a751-cfa8c61c421b to disappear
Jan 25 01:00:21.965: INFO: Pod downwardapi-volume-295ccad8-9c4c-4705-a751-cfa8c61c421b no longer exists
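defaultMode on a downwardAPI volume sets the permission bits for every projected file; a sketch of the volume fragment (the mode value and the item are assumptions for illustration):

  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400        # applied to each projected file
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name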
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:00:21.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9496" for this suite.

• [SLOW TEST:8.382 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3407,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:00:22.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 25 01:00:22.285: INFO: Waiting up to 5m0s for pod "downward-api-c2bc560a-3af3-4815-a466-3072b6d72156" in namespace "downward-api-6625" to be "success or failure"
Jan 25 01:00:22.306: INFO: Pod "downward-api-c2bc560a-3af3-4815-a466-3072b6d72156": Phase="Pending", Reason="", readiness=false. Elapsed: 20.520678ms
Jan 25 01:00:24.313: INFO: Pod "downward-api-c2bc560a-3af3-4815-a466-3072b6d72156": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027258339s
Jan 25 01:00:26.334: INFO: Pod "downward-api-c2bc560a-3af3-4815-a466-3072b6d72156": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048843924s
Jan 25 01:00:28.339: INFO: Pod "downward-api-c2bc560a-3af3-4815-a466-3072b6d72156": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053337969s
Jan 25 01:00:30.348: INFO: Pod "downward-api-c2bc560a-3af3-4815-a466-3072b6d72156": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062296714s
Jan 25 01:00:32.353: INFO: Pod "downward-api-c2bc560a-3af3-4815-a466-3072b6d72156": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067459854s
STEP: Saw pod success
Jan 25 01:00:32.353: INFO: Pod "downward-api-c2bc560a-3af3-4815-a466-3072b6d72156" satisfied condition "success or failure"
Jan 25 01:00:32.356: INFO: Trying to get logs from node jerma-node pod downward-api-c2bc560a-3af3-4815-a466-3072b6d72156 container dapi-container: 
STEP: delete the pod
Jan 25 01:00:32.479: INFO: Waiting for pod downward-api-c2bc560a-3af3-4815-a466-3072b6d72156 to disappear
Jan 25 01:00:32.484: INFO: Pod downward-api-c2bc560a-3af3-4815-a466-3072b6d72156 no longer exists
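The host IP reaches the container as a plain environment variable through a downward API fieldRef; the env fragment the test relies on looks like this (variable name assumed):

  env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP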
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:00:32.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6625" for this suite.

• [SLOW TEST:10.451 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3413,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:00:32.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 01:00:32.660: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a5b6a0b-8ea4-413b-a52f-f19c034d6126" in namespace "projected-9873" to be "success or failure"
Jan 25 01:00:32.675: INFO: Pod "downwardapi-volume-9a5b6a0b-8ea4-413b-a52f-f19c034d6126": Phase="Pending", Reason="", readiness=false. Elapsed: 15.102965ms
Jan 25 01:00:34.680: INFO: Pod "downwardapi-volume-9a5b6a0b-8ea4-413b-a52f-f19c034d6126": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019562502s
Jan 25 01:00:36.686: INFO: Pod "downwardapi-volume-9a5b6a0b-8ea4-413b-a52f-f19c034d6126": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026059362s
Jan 25 01:00:38.695: INFO: Pod "downwardapi-volume-9a5b6a0b-8ea4-413b-a52f-f19c034d6126": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035313728s
Jan 25 01:00:40.701: INFO: Pod "downwardapi-volume-9a5b6a0b-8ea4-413b-a52f-f19c034d6126": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041097655s
STEP: Saw pod success
Jan 25 01:00:40.701: INFO: Pod "downwardapi-volume-9a5b6a0b-8ea4-413b-a52f-f19c034d6126" satisfied condition "success or failure"
Jan 25 01:00:40.706: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9a5b6a0b-8ea4-413b-a52f-f19c034d6126 container client-container: 
STEP: delete the pod
Jan 25 01:00:40.837: INFO: Waiting for pod downwardapi-volume-9a5b6a0b-8ea4-413b-a52f-f19c034d6126 to disappear
Jan 25 01:00:40.844: INFO: Pod downwardapi-volume-9a5b6a0b-8ea4-413b-a52f-f19c034d6126 no longer exists
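Exposing a container's CPU request through a projected downwardAPI volume uses a resourceFieldRef; a sketch of the volume fragment (file path and divisor are assumptions; the container name matches this run):

  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m      # report the request in millicores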
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:00:40.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9873" for this suite.

• [SLOW TEST:8.369 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3423,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:00:40.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-9187
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9187 to expose endpoints map[]
Jan 25 01:00:41.204: INFO: successfully validated that service endpoint-test2 in namespace services-9187 exposes endpoints map[] (180.395753ms elapsed)
STEP: Creating pod pod1 in namespace services-9187
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9187 to expose endpoints map[pod1:[80]]
Jan 25 01:00:45.579: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.35149042s elapsed, will retry)
Jan 25 01:00:48.611: INFO: successfully validated that service endpoint-test2 in namespace services-9187 exposes endpoints map[pod1:[80]] (7.383314849s elapsed)
STEP: Creating pod pod2 in namespace services-9187
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9187 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 25 01:00:53.084: INFO: Unexpected endpoints: found map[52d56d62-369d-433b-8231-e222dae24736:[80]], expected map[pod1:[80] pod2:[80]] (4.465409048s elapsed, will retry)
Jan 25 01:00:56.604: INFO: successfully validated that service endpoint-test2 in namespace services-9187 exposes endpoints map[pod1:[80] pod2:[80]] (7.985650237s elapsed)
STEP: Deleting pod pod1 in namespace services-9187
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9187 to expose endpoints map[pod2:[80]]
Jan 25 01:00:56.664: INFO: successfully validated that service endpoint-test2 in namespace services-9187 exposes endpoints map[pod2:[80]] (24.182601ms elapsed)
STEP: Deleting pod pod2 in namespace services-9187
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9187 to expose endpoints map[]
Jan 25 01:00:56.694: INFO: successfully validated that service endpoint-test2 in namespace services-9187 exposes endpoints map[] (24.795687ms elapsed)
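The endpoint transitions above are directly observable: as pods matching the service selector become Ready or are deleted, the subsets change in lockstep.

  kubectl -n services-9187 get endpoints endpoint-test2
  # ENDPOINTS goes: <none> -> pod1:80 -> pod1:80,pod2:80 -> pod2:80 -> <none>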
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:00:56.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9187" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:15.924 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":211,"skipped":3453,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:00:56.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Jan 25 01:00:56.910: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 01:00:56.934: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 01:00:56.937: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 25 01:00:56.946: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 25 01:00:56.946: INFO: 	Container weave ready: true, restart count 1
Jan 25 01:00:56.946: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 01:00:56.946: INFO: pod1 from services-9187 started at 2020-01-25 01:00:41 +0000 UTC (1 container status recorded)
Jan 25 01:00:56.946: INFO: 	Container pause ready: true, restart count 0
Jan 25 01:00:56.946: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 25 01:00:56.946: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 01:00:56.946: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 25 01:00:56.972: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 25 01:00:56.972: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 25 01:00:56.972: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 25 01:00:56.972: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 01:00:56.972: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 25 01:00:56.972: INFO: 	Container weave ready: true, restart count 0
Jan 25 01:00:56.972: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 01:00:56.972: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 25 01:00:56.972: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 25 01:00:56.972: INFO: pod2 from services-9187 started at 2020-01-25 01:00:48 +0000 UTC (1 container status recorded)
Jan 25 01:00:56.972: INFO: 	Container pause ready: true, restart count 0
Jan 25 01:00:56.972: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 25 01:00:56.972: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 25 01:00:56.972: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 25 01:00:56.972: INFO: 	Container etcd ready: true, restart count 1
Jan 25 01:00:56.972: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 25 01:00:56.972: INFO: 	Container coredns ready: true, restart count 0
Jan 25 01:00:56.972: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 25 01:00:56.972: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-a8bf34db-b809-4e42-b40b-3874fdee2c01 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-a8bf34db-b809-4e42-b40b-3874fdee2c01 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-a8bf34db-b809-4e42-b40b-3874fdee2c01
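The conflict reduces to two pods asking for the same hostPort and protocol on overlapping host IPs; the port fragment for pod4 looks like the sketch below (container port is an assumption), with pod5 differing only in hostIP:

  ports:
  - containerPort: 8080
    hostPort: 54322
    protocol: TCP
    hostIP: 0.0.0.0          # pod4; 0.0.0.0 overlaps every hostIP
  # pod5 uses hostIP: 127.0.0.1 with the same hostPort/protocol,
  # so it cannot be scheduled onto the node where pod4 runs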
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:06:15.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1441" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:318.535 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":212,"skipped":3453,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:06:15.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 01:06:16.434: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 01:06:18.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:06:20.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:06:22.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511176, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 01:06:25.519: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod; it should be denied by the webhook
Jan 25 01:06:33.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-313 to-be-attached-pod -i -c=container1'
Jan 25 01:06:33.766: INFO: rc: 1
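Denying kubectl attach works by matching the pods/attach subresource on the CONNECT operation; a sketch of the rule fragment of the ValidatingWebhookConfiguration (the surrounding clientConfig mirrors the mutating example earlier):

  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]
    resources: ["pods/attach"]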
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:06:33.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-313" for this suite.
STEP: Destroying namespace "webhook-313-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:18.736 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":213,"skipped":3475,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:06:34.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-7b4f74e3-a2af-42ef-b8a2-896a810570c3
STEP: Creating a pod to test consume configMaps
Jan 25 01:06:34.454: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae" in namespace "projected-5053" to be "success or failure"
Jan 25 01:06:34.584: INFO: Pod "pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae": Phase="Pending", Reason="", readiness=false. Elapsed: 129.14931ms
Jan 25 01:06:36.596: INFO: Pod "pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141607834s
Jan 25 01:06:38.615: INFO: Pod "pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160102438s
Jan 25 01:06:40.621: INFO: Pod "pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166966522s
Jan 25 01:06:42.629: INFO: Pod "pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174601622s
Jan 25 01:06:44.651: INFO: Pod "pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae": Phase="Pending", Reason="", readiness=false. Elapsed: 10.1962408s
Jan 25 01:06:46.661: INFO: Pod "pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.206474122s
STEP: Saw pod success
Jan 25 01:06:46.661: INFO: Pod "pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae" satisfied condition "success or failure"
Jan 25 01:06:46.665: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 01:06:46.744: INFO: Waiting for pod pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae to disappear
Jan 25 01:06:46.752: INFO: Pod pod-projected-configmaps-f648244a-71dc-4c62-9c8d-25db1bad86ae no longer exists
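Consuming the projected configMap as non-root combines a pod-level runAsUser with an item mapping; a sketch of the pod spec (UID, key, path, image, and command are assumptions; the configMap name comes from this run):

  spec:
    securityContext:
      runAsUser: 1000        # run as non-root (UID assumed)
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-map-7b4f74e3-a2af-42ef-b8a2-896a810570c3
            items:
            - key: data-2
              path: path/to/data-2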
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:06:46.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5053" for this suite.

• [SLOW TEST:12.702 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3501,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:06:46.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1465
STEP: creating a pod
Jan 25 01:06:46.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-5241 -- logs-generator --log-lines-total 100 --run-duration 20s'
Jan 25 01:06:47.001: INFO: stderr: ""
Jan 25 01:06:47.001: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Waiting for log generator to start.
Jan 25 01:06:47.001: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jan 25 01:06:47.001: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5241" to be "running and ready, or succeeded"
Jan 25 01:06:47.144: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 142.418197ms
Jan 25 01:06:49.150: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14897035s
Jan 25 01:06:51.156: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154841509s
Jan 25 01:06:53.184: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182899513s
Jan 25 01:06:55.239: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.237274964s
Jan 25 01:06:55.239: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jan 25 01:06:55.239: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Jan 25 01:06:55.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5241'
Jan 25 01:06:55.417: INFO: stderr: ""
Jan 25 01:06:55.417: INFO: stdout: "I0125 01:06:53.236624       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/qq5 589\nI0125 01:06:53.436964       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/dn8k 568\nI0125 01:06:53.637188       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/vc7 446\nI0125 01:06:53.837095       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/nhrn 355\nI0125 01:06:54.037611       1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/ncx 474\nI0125 01:06:54.237023       1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/wgg 533\nI0125 01:06:54.437100       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/s77d 321\nI0125 01:06:54.637114       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/gg4 481\nI0125 01:06:54.836999       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/759p 322\nI0125 01:06:55.037113       1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/z4sb 342\nI0125 01:06:55.237195       1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/dbpk 284\n"
STEP: limiting log lines
Jan 25 01:06:55.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5241 --tail=1'
Jan 25 01:06:55.612: INFO: stderr: ""
Jan 25 01:06:55.612: INFO: stdout: "I0125 01:06:55.436911       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/h8j 315\n"
Jan 25 01:06:55.612: INFO: got output "I0125 01:06:55.436911       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/h8j 315\n"
STEP: limiting log bytes
Jan 25 01:06:55.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5241 --limit-bytes=1'
Jan 25 01:06:55.710: INFO: stderr: ""
Jan 25 01:06:55.710: INFO: stdout: "I"
Jan 25 01:06:55.710: INFO: got output "I"
STEP: exposing timestamps
Jan 25 01:06:55.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5241 --tail=1 --timestamps'
Jan 25 01:06:55.855: INFO: stderr: ""
Jan 25 01:06:55.855: INFO: stdout: "2020-01-25T01:06:55.837117973Z I0125 01:06:55.836815       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/tfsh 452\n"
Jan 25 01:06:55.855: INFO: got output "2020-01-25T01:06:55.837117973Z I0125 01:06:55.836815       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/tfsh 452\n"
STEP: restricting to a time range
Jan 25 01:06:58.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5241 --since=1s'
Jan 25 01:06:58.613: INFO: stderr: ""
Jan 25 01:06:58.613: INFO: stdout: "I0125 01:06:57.637045       1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/kcf 463\nI0125 01:06:57.837443       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/hk4w 519\nI0125 01:06:58.037120       1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/nx6 454\nI0125 01:06:58.236969       1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/tbsk 277\nI0125 01:06:58.436927       1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/hbpx 244\n"
Jan 25 01:06:58.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5241 --since=24h'
Jan 25 01:06:58.760: INFO: stderr: ""
Jan 25 01:06:58.760: INFO: stdout: "I0125 01:06:53.236624       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/qq5 589\nI0125 01:06:53.436964       1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/dn8k 568\nI0125 01:06:53.637188       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/vc7 446\nI0125 01:06:53.837095       1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/nhrn 355\nI0125 01:06:54.037611       1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/ncx 474\nI0125 01:06:54.237023       1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/wgg 533\nI0125 01:06:54.437100       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/s77d 321\nI0125 01:06:54.637114       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/gg4 481\nI0125 01:06:54.836999       1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/759p 322\nI0125 01:06:55.037113       1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/z4sb 342\nI0125 01:06:55.237195       1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/dbpk 284\nI0125 01:06:55.436911       1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/h8j 315\nI0125 01:06:55.636951       1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/hd5 519\nI0125 01:06:55.836815       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/tfsh 452\nI0125 01:06:56.037479       1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/z64 347\nI0125 01:06:56.236914       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/6vpm 353\nI0125 01:06:56.436991       1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/cf9 554\nI0125 01:06:56.636935       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/v5w 269\nI0125 01:06:56.836972       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/ft7d 358\nI0125 01:06:57.037138       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/2fh 486\nI0125 01:06:57.237869       1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/qhc5 435\nI0125 01:06:57.437134       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/q86 432\nI0125 01:06:57.637045       1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/kcf 463\nI0125 01:06:57.837443       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/hk4w 519\nI0125 01:06:58.037120       1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/nx6 454\nI0125 01:06:58.236969       1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/tbsk 277\nI0125 01:06:58.436927       1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/hbpx 244\nI0125 01:06:58.636908       1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/x5p 301\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1471
Jan 25 01:06:58.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5241'
Jan 25 01:07:12.379: INFO: stderr: ""
Jan 25 01:07:12.379: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:07:12.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5241" for this suite.

• [SLOW TEST:25.640 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":215,"skipped":3502,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:07:12.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 01:07:12.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 25 01:07:12.616: INFO: stderr: ""
Jan 25 01:07:12.616: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.1.106+4f70231ce7736c\", GitCommit:\"4f70231ce7736cc748f76526c98955f86c667a41\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T17:08:54Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:07:12.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2774" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":216,"skipped":3539,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:07:12.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod test-webserver-aceb137d-9271-4860-be11-f2d810c639ed in namespace container-probe-7741
Jan 25 01:07:20.802: INFO: Started pod test-webserver-aceb137d-9271-4860-be11-f2d810c639ed in namespace container-probe-7741
STEP: checking the pod's current state and verifying that restartCount is present
Jan 25 01:07:20.806: INFO: Initial restart count of pod test-webserver-aceb137d-9271-4860-be11-f2d810c639ed is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:11:22.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7741" for this suite.

• [SLOW TEST:249.633 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:11:22.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 01:11:22.405: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 25 01:11:25.255: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:11:25.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6440" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":218,"skipped":3578,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:11:25.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 25 01:11:26.885: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 25 01:11:28.960: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511487, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:11:31.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511487, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:11:33.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511487, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:11:34.966: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511487, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:11:37.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511487, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:11:39.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511487, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:11:41.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511487, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511486, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 01:11:43.992: INFO: Waiting for the number of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 01:11:44.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:11:46.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9380" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:20.772 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":219,"skipped":3602,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:11:46.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 01:11:47.637: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 01:11:49.649: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:11:51.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:11:53.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:11:55.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511507, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 01:11:58.698: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:11:59.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7664" for this suite.
STEP: Destroying namespace "webhook-7664-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.151 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":220,"skipped":3639,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:11:59.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 01:12:00.272: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 01:12:02.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:12:04.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:12:06.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:12:08.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511520, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 01:12:11.340: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 25 01:12:11.374: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:12:11.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9040" for this suite.
STEP: Destroying namespace "webhook-9040-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.964 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":221,"skipped":3659,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:12:11.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0125 01:12:21.794158       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 01:12:21.794: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:12:21.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9679" for this suite.

• [SLOW TEST:10.214 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":222,"skipped":3703,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:12:21.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:331
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the initial replication controller
Jan 25 01:12:21.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9483'
Jan 25 01:12:24.975: INFO: stderr: ""
Jan 25 01:12:24.975: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 01:12:24.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9483'
Jan 25 01:12:25.249: INFO: stderr: ""
Jan 25 01:12:25.249: INFO: stdout: "update-demo-nautilus-lzpzw update-demo-nautilus-srr9c "
Jan 25 01:12:25.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lzpzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9483'
Jan 25 01:12:25.343: INFO: stderr: ""
Jan 25 01:12:25.344: INFO: stdout: ""
Jan 25 01:12:25.344: INFO: update-demo-nautilus-lzpzw is created but not running
Jan 25 01:12:30.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9483'
Jan 25 01:12:31.526: INFO: stderr: ""
Jan 25 01:12:31.526: INFO: stdout: "update-demo-nautilus-lzpzw update-demo-nautilus-srr9c "
Jan 25 01:12:31.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lzpzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9483'
Jan 25 01:12:32.489: INFO: stderr: ""
Jan 25 01:12:32.489: INFO: stdout: ""
Jan 25 01:12:32.489: INFO: update-demo-nautilus-lzpzw is created but not running
Jan 25 01:12:37.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9483'
Jan 25 01:12:37.664: INFO: stderr: ""
Jan 25 01:12:37.664: INFO: stdout: "update-demo-nautilus-lzpzw update-demo-nautilus-srr9c "
Jan 25 01:12:37.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lzpzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9483'
Jan 25 01:12:37.766: INFO: stderr: ""
Jan 25 01:12:37.766: INFO: stdout: "true"
Jan 25 01:12:37.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lzpzw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9483'
Jan 25 01:12:37.902: INFO: stderr: ""
Jan 25 01:12:37.902: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 01:12:37.902: INFO: validating pod update-demo-nautilus-lzpzw
Jan 25 01:12:37.918: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 01:12:37.919: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 01:12:37.919: INFO: update-demo-nautilus-lzpzw is verified up and running
Jan 25 01:12:37.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srr9c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9483'
Jan 25 01:12:38.031: INFO: stderr: ""
Jan 25 01:12:38.031: INFO: stdout: "true"
Jan 25 01:12:38.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srr9c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9483'
Jan 25 01:12:38.124: INFO: stderr: ""
Jan 25 01:12:38.124: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 01:12:38.124: INFO: validating pod update-demo-nautilus-srr9c
Jan 25 01:12:38.132: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 01:12:38.132: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 25 01:12:38.132: INFO: update-demo-nautilus-srr9c is verified up and running
STEP: rolling-update to new replication controller
Jan 25 01:12:38.135: INFO: scanned /root for discovery docs: 
Jan 25 01:12:38.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9483'
Jan 25 01:13:08.689: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 25 01:13:08.689: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 01:13:08.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9483'
Jan 25 01:13:08.889: INFO: stderr: ""
Jan 25 01:13:08.889: INFO: stdout: "update-demo-kitten-b7vvw update-demo-kitten-sbt2l "
Jan 25 01:13:08.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b7vvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9483'
Jan 25 01:13:09.000: INFO: stderr: ""
Jan 25 01:13:09.000: INFO: stdout: "true"
Jan 25 01:13:09.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b7vvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9483'
Jan 25 01:13:09.092: INFO: stderr: ""
Jan 25 01:13:09.092: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 25 01:13:09.092: INFO: validating pod update-demo-kitten-b7vvw
Jan 25 01:13:09.102: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 25 01:13:09.102: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 25 01:13:09.102: INFO: update-demo-kitten-b7vvw is verified up and running
Jan 25 01:13:09.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sbt2l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9483'
Jan 25 01:13:09.221: INFO: stderr: ""
Jan 25 01:13:09.221: INFO: stdout: "true"
Jan 25 01:13:09.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sbt2l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9483'
Jan 25 01:13:09.343: INFO: stderr: ""
Jan 25 01:13:09.343: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 25 01:13:09.343: INFO: validating pod update-demo-kitten-sbt2l
Jan 25 01:13:09.404: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 25 01:13:09.404: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 25 01:13:09.404: INFO: update-demo-kitten-sbt2l is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:13:09.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9483" for this suite.

• [SLOW TEST:47.612 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":223,"skipped":3723,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:13:09.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 25 01:13:21.064: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:13:21.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6835" for this suite.

• [SLOW TEST:11.713 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3729,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:13:21.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 25 01:13:21.214: INFO: Waiting up to 5m0s for pod "pod-464217a1-3fbf-4158-b4af-9c2bd1b88770" in namespace "emptydir-5644" to be "success or failure"
Jan 25 01:13:21.295: INFO: Pod "pod-464217a1-3fbf-4158-b4af-9c2bd1b88770": Phase="Pending", Reason="", readiness=false. Elapsed: 80.788349ms
Jan 25 01:13:23.301: INFO: Pod "pod-464217a1-3fbf-4158-b4af-9c2bd1b88770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086521789s
Jan 25 01:13:25.315: INFO: Pod "pod-464217a1-3fbf-4158-b4af-9c2bd1b88770": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100222687s
Jan 25 01:13:27.360: INFO: Pod "pod-464217a1-3fbf-4158-b4af-9c2bd1b88770": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146110197s
Jan 25 01:13:29.369: INFO: Pod "pod-464217a1-3fbf-4158-b4af-9c2bd1b88770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.155077033s
STEP: Saw pod success
Jan 25 01:13:29.370: INFO: Pod "pod-464217a1-3fbf-4158-b4af-9c2bd1b88770" satisfied condition "success or failure"
Jan 25 01:13:29.398: INFO: Trying to get logs from node jerma-node pod pod-464217a1-3fbf-4158-b4af-9c2bd1b88770 container test-container: 
STEP: delete the pod
Jan 25 01:13:29.505: INFO: Waiting for pod pod-464217a1-3fbf-4158-b4af-9c2bd1b88770 to disappear
Jan 25 01:13:29.561: INFO: Pod pod-464217a1-3fbf-4158-b4af-9c2bd1b88770 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:13:29.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5644" for this suite.

• [SLOW TEST:8.470 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3772,"failed":0}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:13:29.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:13:46.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-801" for this suite.

• [SLOW TEST:16.906 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":226,"skipped":3774,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:13:46.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:13:57.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3004" for this suite.

• [SLOW TEST:11.262 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":227,"skipped":3782,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:13:57.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-206bfbbd-9650-4e5b-a979-30b27ebaa4c0
STEP: Creating a pod to test consume secrets
Jan 25 01:13:57.922: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0296ee08-2c8a-4f5d-a608-b9a9149fd6ca" in namespace "projected-214" to be "success or failure"
Jan 25 01:13:57.950: INFO: Pod "pod-projected-secrets-0296ee08-2c8a-4f5d-a608-b9a9149fd6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 27.482617ms
Jan 25 01:13:59.957: INFO: Pod "pod-projected-secrets-0296ee08-2c8a-4f5d-a608-b9a9149fd6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03483845s
Jan 25 01:14:01.963: INFO: Pod "pod-projected-secrets-0296ee08-2c8a-4f5d-a608-b9a9149fd6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041282096s
Jan 25 01:14:03.969: INFO: Pod "pod-projected-secrets-0296ee08-2c8a-4f5d-a608-b9a9149fd6ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046692477s
Jan 25 01:14:05.975: INFO: Pod "pod-projected-secrets-0296ee08-2c8a-4f5d-a608-b9a9149fd6ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05249091s
STEP: Saw pod success
Jan 25 01:14:05.975: INFO: Pod "pod-projected-secrets-0296ee08-2c8a-4f5d-a608-b9a9149fd6ca" satisfied condition "success or failure"
Jan 25 01:14:05.977: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-0296ee08-2c8a-4f5d-a608-b9a9149fd6ca container projected-secret-volume-test: 
STEP: delete the pod
Jan 25 01:14:06.157: INFO: Waiting for pod pod-projected-secrets-0296ee08-2c8a-4f5d-a608-b9a9149fd6ca to disappear
Jan 25 01:14:06.171: INFO: Pod pod-projected-secrets-0296ee08-2c8a-4f5d-a608-b9a9149fd6ca no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:14:06.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-214" for this suite.

• [SLOW TEST:8.410 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3793,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:14:06.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 01:14:07.257: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 01:14:09.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:14:11.296: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:14:13.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511647, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 01:14:16.376: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:14:16.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3185" for this suite.
STEP: Destroying namespace "webhook-3185-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.825 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":229,"skipped":3796,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:14:17.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 25 01:14:17.505: INFO: Waiting up to 5m0s for pod "pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5" in namespace "emptydir-6323" to be "success or failure"
Jan 25 01:14:17.617: INFO: Pod "pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5": Phase="Pending", Reason="", readiness=false. Elapsed: 112.659588ms
Jan 25 01:14:19.630: INFO: Pod "pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124992704s
Jan 25 01:14:21.642: INFO: Pod "pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137108014s
Jan 25 01:14:23.647: INFO: Pod "pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142600589s
Jan 25 01:14:25.655: INFO: Pod "pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150146311s
Jan 25 01:14:27.694: INFO: Pod "pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.189289675s
Jan 25 01:14:29.772: INFO: Pod "pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.266952983s
STEP: Saw pod success
Jan 25 01:14:29.772: INFO: Pod "pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5" satisfied condition "success or failure"
Jan 25 01:14:29.778: INFO: Trying to get logs from node jerma-node pod pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5 container test-container: 
STEP: delete the pod
Jan 25 01:14:29.928: INFO: Waiting for pod pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5 to disappear
Jan 25 01:14:29.940: INFO: Pod pod-fb9e03bc-7a14-422e-bbe9-8d5be5dac8c5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:14:29.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6323" for this suite.

• [SLOW TEST:12.942 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3835,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:14:29.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 25 01:14:30.073: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:14:39.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5367" for this suite.

• [SLOW TEST:10.067 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":231,"skipped":3852,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:14:40.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0125 01:14:43.083738       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 25 01:14:43.083: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:14:43.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8207" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":232,"skipped":3875,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:14:43.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 01:14:44.802: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 01:14:47.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:14:49.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:14:51.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:14:53.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511684, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 01:14:56.966: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:14:57.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8999" for this suite.
STEP: Destroying namespace "webhook-8999-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:14.367 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":233,"skipped":3877,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:14:57.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 01:14:57.709: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b4b7dfce-dec5-4fd9-b3c5-705e4d75d350", Controller:(*bool)(0xc00522cc4a), BlockOwnerDeletion:(*bool)(0xc00522cc4b)}}
Jan 25 01:14:57.723: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ca32c250-1892-4784-9cc0-c31ed38c1bf9", Controller:(*bool)(0xc0047fe1ba), BlockOwnerDeletion:(*bool)(0xc0047fe1bb)}}
Jan 25 01:14:57.739: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f4f8f681-2eaf-46be-81c1-a47f6edd3fe0", Controller:(*bool)(0xc00522cdda), BlockOwnerDeletion:(*bool)(0xc00522cddb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:15:02.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5698" for this suite.

• [SLOW TEST:5.398 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":234,"skipped":3878,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:15:02.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 25 01:15:03.094: INFO: Waiting up to 5m0s for pod "pod-000ce610-4082-458b-88c5-5fff55e06822" in namespace "emptydir-6306" to be "success or failure"
Jan 25 01:15:03.199: INFO: Pod "pod-000ce610-4082-458b-88c5-5fff55e06822": Phase="Pending", Reason="", readiness=false. Elapsed: 104.685914ms
Jan 25 01:15:05.206: INFO: Pod "pod-000ce610-4082-458b-88c5-5fff55e06822": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111693399s
Jan 25 01:15:07.212: INFO: Pod "pod-000ce610-4082-458b-88c5-5fff55e06822": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117680008s
Jan 25 01:15:09.221: INFO: Pod "pod-000ce610-4082-458b-88c5-5fff55e06822": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12667052s
Jan 25 01:15:11.227: INFO: Pod "pod-000ce610-4082-458b-88c5-5fff55e06822": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132621604s
Jan 25 01:15:13.232: INFO: Pod "pod-000ce610-4082-458b-88c5-5fff55e06822": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137788562s
STEP: Saw pod success
Jan 25 01:15:13.232: INFO: Pod "pod-000ce610-4082-458b-88c5-5fff55e06822" satisfied condition "success or failure"
Jan 25 01:15:13.235: INFO: Trying to get logs from node jerma-node pod pod-000ce610-4082-458b-88c5-5fff55e06822 container test-container: 
STEP: delete the pod
Jan 25 01:15:13.272: INFO: Waiting for pod pod-000ce610-4082-458b-88c5-5fff55e06822 to disappear
Jan 25 01:15:13.388: INFO: Pod pod-000ce610-4082-458b-88c5-5fff55e06822 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:15:13.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6306" for this suite.

• [SLOW TEST:10.541 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3880,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:15:13.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-464e2b5e-d903-4d27-af9e-a0f6bf5d871f
STEP: Creating a pod to test consume configMaps
Jan 25 01:15:13.585: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8" in namespace "configmap-1823" to be "success or failure"
Jan 25 01:15:13.681: INFO: Pod "pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8": Phase="Pending", Reason="", readiness=false. Elapsed: 95.404369ms
Jan 25 01:15:15.688: INFO: Pod "pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102624629s
Jan 25 01:15:17.697: INFO: Pod "pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112022516s
Jan 25 01:15:19.705: INFO: Pod "pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119446415s
Jan 25 01:15:21.713: INFO: Pod "pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127877943s
Jan 25 01:15:23.721: INFO: Pod "pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.135214822s
STEP: Saw pod success
Jan 25 01:15:23.721: INFO: Pod "pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8" satisfied condition "success or failure"
Jan 25 01:15:23.725: INFO: Trying to get logs from node jerma-node pod pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8 container configmap-volume-test: 
STEP: delete the pod
Jan 25 01:15:23.805: INFO: Waiting for pod pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8 to disappear
Jan 25 01:15:23.816: INFO: Pod pod-configmaps-0c8c8670-dcb3-4870-a7ad-983a8d52e6d8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:15:23.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1823" for this suite.

• [SLOW TEST:10.512 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3886,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:15:23.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:15:31.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6948" for this suite.

• [SLOW TEST:7.129 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":237,"skipped":3899,"failed":0}
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:15:31.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 01:15:31.193: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 16.074715ms)
Jan 25 01:15:31.204: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 10.514517ms)
Jan 25 01:15:31.218: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 14.735926ms)
Jan 25 01:15:31.225: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.398438ms)
Jan 25 01:15:31.231: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.056133ms)
Jan 25 01:15:31.236: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.788147ms)
Jan 25 01:15:31.243: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.992678ms)
Jan 25 01:15:31.247: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.466908ms)
Jan 25 01:15:31.252: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.416628ms)
Jan 25 01:15:31.256: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.272677ms)
Jan 25 01:15:31.261: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.458502ms)
Jan 25 01:15:31.264: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.648156ms)
Jan 25 01:15:31.269: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.704272ms)
Jan 25 01:15:31.273: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 3.567885ms)
Jan 25 01:15:31.307: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 33.930422ms)
Jan 25 01:15:31.315: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 8.389161ms)
Jan 25 01:15:31.320: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.167734ms)
Jan 25 01:15:31.341: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 20.923698ms)
Jan 25 01:15:31.351: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 10.040516ms)
Jan 25 01:15:31.357: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.07608ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:15:31.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-781" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":238,"skipped":3903,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:15:31.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-58f8f125-20b6-404c-ae33-d71364d3b009
STEP: Creating secret with name s-test-opt-upd-3a06de1b-dfdc-4eae-933d-8c0ee074b8f7
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-58f8f125-20b6-404c-ae33-d71364d3b009
STEP: Updating secret s-test-opt-upd-3a06de1b-dfdc-4eae-933d-8c0ee074b8f7
STEP: Creating secret with name s-test-opt-create-37a58e41-9484-4a33-89e5-c7d9ebb0a82e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:15:45.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6028" for this suite.

• [SLOW TEST:14.490 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3913,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:15:45.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's command
Jan 25 01:15:46.014: INFO: Waiting up to 5m0s for pod "var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a" in namespace "var-expansion-4066" to be "success or failure"
Jan 25 01:15:46.035: INFO: Pod "var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.60038ms
Jan 25 01:15:48.041: INFO: Pod "var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027316386s
Jan 25 01:15:50.048: INFO: Pod "var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033730585s
Jan 25 01:15:52.087: INFO: Pod "var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073277701s
Jan 25 01:15:54.839: INFO: Pod "var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.824738621s
Jan 25 01:15:56.864: INFO: Pod "var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.850431998s
Jan 25 01:15:58.871: INFO: Pod "var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.857105603s
STEP: Saw pod success
Jan 25 01:15:58.871: INFO: Pod "var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a" satisfied condition "success or failure"
Jan 25 01:15:58.875: INFO: Trying to get logs from node jerma-node pod var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a container dapi-container: 
STEP: delete the pod
Jan 25 01:15:59.329: INFO: Waiting for pod var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a to disappear
Jan 25 01:15:59.343: INFO: Pod var-expansion-0a916094-796b-4885-b6bb-c02c8482c25a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:15:59.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4066" for this suite.

• [SLOW TEST:13.498 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3972,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:15:59.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 01:15:59.647: INFO: Waiting up to 5m0s for pod "downwardapi-volume-138c3b27-1d21-498b-89e1-f6b72d2ade95" in namespace "projected-2702" to be "success or failure"
Jan 25 01:15:59.682: INFO: Pod "downwardapi-volume-138c3b27-1d21-498b-89e1-f6b72d2ade95": Phase="Pending", Reason="", readiness=false. Elapsed: 34.43416ms
Jan 25 01:16:01.692: INFO: Pod "downwardapi-volume-138c3b27-1d21-498b-89e1-f6b72d2ade95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04498806s
Jan 25 01:16:03.703: INFO: Pod "downwardapi-volume-138c3b27-1d21-498b-89e1-f6b72d2ade95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055441896s
Jan 25 01:16:05.714: INFO: Pod "downwardapi-volume-138c3b27-1d21-498b-89e1-f6b72d2ade95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066467057s
Jan 25 01:16:07.721: INFO: Pod "downwardapi-volume-138c3b27-1d21-498b-89e1-f6b72d2ade95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074057307s
STEP: Saw pod success
Jan 25 01:16:07.721: INFO: Pod "downwardapi-volume-138c3b27-1d21-498b-89e1-f6b72d2ade95" satisfied condition "success or failure"
Jan 25 01:16:07.727: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-138c3b27-1d21-498b-89e1-f6b72d2ade95 container client-container: 
STEP: delete the pod
Jan 25 01:16:08.037: INFO: Waiting for pod downwardapi-volume-138c3b27-1d21-498b-89e1-f6b72d2ade95 to disappear
Jan 25 01:16:08.045: INFO: Pod downwardapi-volume-138c3b27-1d21-498b-89e1-f6b72d2ade95 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:16:08.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2702" for this suite.

• [SLOW TEST:8.703 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4011,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:16:08.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service nodeport-service with the type=NodePort in namespace services-7844
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7844
STEP: creating replication controller externalsvc in namespace services-7844
I0125 01:16:08.468194       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7844, replica count: 2
I0125 01:16:11.519127       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 01:16:14.519509       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 01:16:17.519912       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0125 01:16:20.520273       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jan 25 01:16:20.615: INFO: Creating new exec pod
Jan 25 01:16:28.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7844 execpodl4khj -- /bin/sh -x -c nslookup nodeport-service'
Jan 25 01:16:29.069: INFO: stderr: "I0125 01:16:28.889592    4035 log.go:172] (0xc000b14630) (0xc0007b3ea0) Create stream\nI0125 01:16:28.889730    4035 log.go:172] (0xc000b14630) (0xc0007b3ea0) Stream added, broadcasting: 1\nI0125 01:16:28.893664    4035 log.go:172] (0xc000b14630) Reply frame received for 1\nI0125 01:16:28.893749    4035 log.go:172] (0xc000b14630) (0xc00067a780) Create stream\nI0125 01:16:28.893767    4035 log.go:172] (0xc000b14630) (0xc00067a780) Stream added, broadcasting: 3\nI0125 01:16:28.895251    4035 log.go:172] (0xc000b14630) Reply frame received for 3\nI0125 01:16:28.895282    4035 log.go:172] (0xc000b14630) (0xc00057f400) Create stream\nI0125 01:16:28.895297    4035 log.go:172] (0xc000b14630) (0xc00057f400) Stream added, broadcasting: 5\nI0125 01:16:28.899090    4035 log.go:172] (0xc000b14630) Reply frame received for 5\nI0125 01:16:28.977233    4035 log.go:172] (0xc000b14630) Data frame received for 5\nI0125 01:16:28.977361    4035 log.go:172] (0xc00057f400) (5) Data frame handling\nI0125 01:16:28.977412    4035 log.go:172] (0xc00057f400) (5) Data frame sent\n+ nslookup nodeport-service\nI0125 01:16:28.990139    4035 log.go:172] (0xc000b14630) Data frame received for 3\nI0125 01:16:28.990188    4035 log.go:172] (0xc00067a780) (3) Data frame handling\nI0125 01:16:28.990206    4035 log.go:172] (0xc00067a780) (3) Data frame sent\nI0125 01:16:28.993969    4035 log.go:172] (0xc000b14630) Data frame received for 3\nI0125 01:16:28.993985    4035 log.go:172] (0xc00067a780) (3) Data frame handling\nI0125 01:16:28.994000    4035 log.go:172] (0xc00067a780) (3) Data frame sent\nI0125 01:16:29.057053    4035 log.go:172] (0xc000b14630) (0xc00067a780) Stream removed, broadcasting: 3\nI0125 01:16:29.057383    4035 log.go:172] (0xc000b14630) Data frame received for 1\nI0125 01:16:29.057400    4035 log.go:172] (0xc0007b3ea0) (1) Data frame handling\nI0125 01:16:29.057413    4035 log.go:172] (0xc0007b3ea0) (1) Data frame sent\nI0125 01:16:29.057422    4035 log.go:172] (0xc000b14630) (0xc0007b3ea0) Stream removed, broadcasting: 1\nI0125 01:16:29.057652    4035 log.go:172] (0xc000b14630) (0xc00057f400) Stream removed, broadcasting: 5\nI0125 01:16:29.057694    4035 log.go:172] (0xc000b14630) Go away received\nI0125 01:16:29.058706    4035 log.go:172] (0xc000b14630) (0xc0007b3ea0) Stream removed, broadcasting: 1\nI0125 01:16:29.058722    4035 log.go:172] (0xc000b14630) (0xc00067a780) Stream removed, broadcasting: 3\nI0125 01:16:29.058729    4035 log.go:172] (0xc000b14630) (0xc00057f400) Stream removed, broadcasting: 5\n"
Jan 25 01:16:29.069: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7844.svc.cluster.local\tcanonical name = externalsvc.services-7844.svc.cluster.local.\nName:\texternalsvc.services-7844.svc.cluster.local\nAddress: 10.96.137.83\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7844, will wait for the garbage collector to delete the pods
Jan 25 01:16:29.130: INFO: Deleting ReplicationController externalsvc took: 6.200694ms
Jan 25 01:16:29.530: INFO: Terminating ReplicationController externalsvc pods took: 400.360963ms
Jan 25 01:16:43.181: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:16:43.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7844" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:35.175 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":242,"skipped":4024,"failed":0}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:16:43.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Jan 25 01:16:43.314: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 25 01:16:43.329: INFO: Waiting for terminating namespaces to be deleted...
Jan 25 01:16:43.348: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 25 01:16:43.355: INFO: execpodl4khj from services-7844 started at 2020-01-25 01:16:20 +0000 UTC (1 container status recorded)
Jan 25 01:16:43.355: INFO: 	Container agnhost-pause ready: true, restart count 0
Jan 25 01:16:43.355: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 25 01:16:43.355: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 01:16:43.355: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 25 01:16:43.355: INFO: 	Container weave ready: true, restart count 1
Jan 25 01:16:43.355: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 01:16:43.355: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 25 01:16:43.377: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 25 01:16:43.377: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 25 01:16:43.377: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 25 01:16:43.377: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 25 01:16:43.377: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 25 01:16:43.377: INFO: 	Container weave ready: true, restart count 0
Jan 25 01:16:43.377: INFO: 	Container weave-npc ready: true, restart count 0
Jan 25 01:16:43.377: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 25 01:16:43.377: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 25 01:16:43.377: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 25 01:16:43.377: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 25 01:16:43.377: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 25 01:16:43.377: INFO: 	Container etcd ready: true, restart count 1
Jan 25 01:16:43.377: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 25 01:16:43.377: INFO: 	Container coredns ready: true, restart count 0
Jan 25 01:16:43.377: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 25 01:16:43.377: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ecfc5328ca94bb], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:16:44.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5573" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":243,"skipped":4031,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:16:44.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 01:16:44.544: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b" in namespace "downward-api-80" to be "success or failure"
Jan 25 01:16:44.566: INFO: Pod "downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.47294ms
Jan 25 01:16:46.578: INFO: Pod "downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032712246s
Jan 25 01:16:48.617: INFO: Pod "downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072314496s
Jan 25 01:16:50.623: INFO: Pod "downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078541289s
Jan 25 01:16:52.630: INFO: Pod "downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084734351s
Jan 25 01:16:54.644: INFO: Pod "downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099040001s
STEP: Saw pod success
Jan 25 01:16:54.644: INFO: Pod "downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b" satisfied condition "success or failure"
Jan 25 01:16:54.649: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b container client-container: 
STEP: delete the pod
Jan 25 01:16:54.714: INFO: Waiting for pod downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b to disappear
Jan 25 01:16:54.723: INFO: Pod downwardapi-volume-ccdd2971-134f-426e-ba19-06fea48f2a9b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:16:54.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-80" for this suite.

• [SLOW TEST:10.364 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4035,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:16:54.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 01:16:55.058: INFO: Waiting up to 5m0s for pod "downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4" in namespace "downward-api-9867" to be "success or failure"
Jan 25 01:16:55.199: INFO: Pod "downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4": Phase="Pending", Reason="", readiness=false. Elapsed: 141.275134ms
Jan 25 01:16:57.206: INFO: Pod "downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147704011s
Jan 25 01:16:59.216: INFO: Pod "downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157816354s
Jan 25 01:17:01.222: INFO: Pod "downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163921517s
Jan 25 01:17:03.229: INFO: Pod "downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170851717s
Jan 25 01:17:05.235: INFO: Pod "downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176735999s
STEP: Saw pod success
Jan 25 01:17:05.235: INFO: Pod "downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4" satisfied condition "success or failure"
Jan 25 01:17:05.239: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4 container client-container: 
STEP: delete the pod
Jan 25 01:17:05.356: INFO: Waiting for pod downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4 to disappear
Jan 25 01:17:05.365: INFO: Pod downwardapi-volume-973f6e08-6806-4a97-b799-c19292a534e4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:17:05.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9867" for this suite.

• [SLOW TEST:10.595 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4037,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:17:05.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:17:05.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6518" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":246,"skipped":4062,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:17:05.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 25 01:17:05.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 25 01:17:19.262: INFO: >>> kubeConfig: /root/.kube/config
Jan 25 01:17:22.808: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:17:32.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3007" for this suite.

• [SLOW TEST:26.884 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":247,"skipped":4064,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:17:32.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-f81e8fa2-269f-4fdf-95d7-5f0f71db537e
STEP: Creating a pod to test consume configMaps
Jan 25 01:17:32.726: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d34d503f-0e35-4a55-9efe-66156438b772" in namespace "projected-6428" to be "success or failure"
Jan 25 01:17:32.731: INFO: Pod "pod-projected-configmaps-d34d503f-0e35-4a55-9efe-66156438b772": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689805ms
Jan 25 01:17:34.746: INFO: Pod "pod-projected-configmaps-d34d503f-0e35-4a55-9efe-66156438b772": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019855545s
Jan 25 01:17:36.755: INFO: Pod "pod-projected-configmaps-d34d503f-0e35-4a55-9efe-66156438b772": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02885694s
Jan 25 01:17:38.761: INFO: Pod "pod-projected-configmaps-d34d503f-0e35-4a55-9efe-66156438b772": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034725488s
Jan 25 01:17:40.767: INFO: Pod "pod-projected-configmaps-d34d503f-0e35-4a55-9efe-66156438b772": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040744754s
STEP: Saw pod success
Jan 25 01:17:40.767: INFO: Pod "pod-projected-configmaps-d34d503f-0e35-4a55-9efe-66156438b772" satisfied condition "success or failure"
Jan 25 01:17:40.778: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-d34d503f-0e35-4a55-9efe-66156438b772 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 01:17:40.816: INFO: Waiting for pod pod-projected-configmaps-d34d503f-0e35-4a55-9efe-66156438b772 to disappear
Jan 25 01:17:40.833: INFO: Pod pod-projected-configmaps-d34d503f-0e35-4a55-9efe-66156438b772 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:17:40.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6428" for this suite.

• [SLOW TEST:8.249 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4079,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:17:40.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 25 01:17:41.036: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:18:02.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8416" for this suite.

• [SLOW TEST:21.507 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4086,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:18:02.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:18:10.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5413" for this suite.

• [SLOW TEST:8.287 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4109,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:18:10.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 25 01:18:10.866: INFO: Waiting up to 5m0s for pod "downward-api-b672b261-d9fa-4dc7-9079-dcafeff6d90f" in namespace "downward-api-6705" to be "success or failure"
Jan 25 01:18:10.920: INFO: Pod "downward-api-b672b261-d9fa-4dc7-9079-dcafeff6d90f": Phase="Pending", Reason="", readiness=false. Elapsed: 54.280606ms
Jan 25 01:18:12.926: INFO: Pod "downward-api-b672b261-d9fa-4dc7-9079-dcafeff6d90f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060546195s
Jan 25 01:18:14.934: INFO: Pod "downward-api-b672b261-d9fa-4dc7-9079-dcafeff6d90f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068077251s
Jan 25 01:18:16.943: INFO: Pod "downward-api-b672b261-d9fa-4dc7-9079-dcafeff6d90f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07720078s
Jan 25 01:18:18.948: INFO: Pod "downward-api-b672b261-d9fa-4dc7-9079-dcafeff6d90f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082758894s
STEP: Saw pod success
Jan 25 01:18:18.949: INFO: Pod "downward-api-b672b261-d9fa-4dc7-9079-dcafeff6d90f" satisfied condition "success or failure"
Jan 25 01:18:18.953: INFO: Trying to get logs from node jerma-node pod downward-api-b672b261-d9fa-4dc7-9079-dcafeff6d90f container dapi-container: 
STEP: delete the pod
Jan 25 01:18:19.032: INFO: Waiting for pod downward-api-b672b261-d9fa-4dc7-9079-dcafeff6d90f to disappear
Jan 25 01:18:19.048: INFO: Pod downward-api-b672b261-d9fa-4dc7-9079-dcafeff6d90f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:18:19.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6705" for this suite.

• [SLOW TEST:8.418 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4113,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:18:19.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 25 01:18:19.265: INFO: Waiting up to 5m0s for pod "downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062" in namespace "downward-api-475" to be "success or failure"
Jan 25 01:18:19.279: INFO: Pod "downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062": Phase="Pending", Reason="", readiness=false. Elapsed: 14.487621ms
Jan 25 01:18:21.288: INFO: Pod "downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023415385s
Jan 25 01:18:23.295: INFO: Pod "downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030608797s
Jan 25 01:18:25.303: INFO: Pod "downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038718727s
Jan 25 01:18:27.311: INFO: Pod "downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046357193s
Jan 25 01:18:29.318: INFO: Pod "downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053316135s
STEP: Saw pod success
Jan 25 01:18:29.318: INFO: Pod "downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062" satisfied condition "success or failure"
Jan 25 01:18:29.322: INFO: Trying to get logs from node jerma-node pod downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062 container dapi-container: 
STEP: delete the pod
Jan 25 01:18:29.374: INFO: Waiting for pod downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062 to disappear
Jan 25 01:18:29.380: INFO: Pod downward-api-d5d77840-d500-4737-b8b1-e3ce9a2fc062 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:18:29.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-475" for this suite.

• [SLOW TEST:10.325 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4131,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:18:29.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 25 01:18:29.502: INFO: Waiting up to 5m0s for pod "pod-5dcf7bf2-a84f-4dab-9ba8-dd1360694f62" in namespace "emptydir-1709" to be "success or failure"
Jan 25 01:18:29.513: INFO: Pod "pod-5dcf7bf2-a84f-4dab-9ba8-dd1360694f62": Phase="Pending", Reason="", readiness=false. Elapsed: 11.729058ms
Jan 25 01:18:31.521: INFO: Pod "pod-5dcf7bf2-a84f-4dab-9ba8-dd1360694f62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019782377s
Jan 25 01:18:33.528: INFO: Pod "pod-5dcf7bf2-a84f-4dab-9ba8-dd1360694f62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026431258s
Jan 25 01:18:35.538: INFO: Pod "pod-5dcf7bf2-a84f-4dab-9ba8-dd1360694f62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03683771s
Jan 25 01:18:37.560: INFO: Pod "pod-5dcf7bf2-a84f-4dab-9ba8-dd1360694f62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058393773s
STEP: Saw pod success
Jan 25 01:18:37.560: INFO: Pod "pod-5dcf7bf2-a84f-4dab-9ba8-dd1360694f62" satisfied condition "success or failure"
Jan 25 01:18:37.567: INFO: Trying to get logs from node jerma-node pod pod-5dcf7bf2-a84f-4dab-9ba8-dd1360694f62 container test-container: 
STEP: delete the pod
Jan 25 01:18:37.607: INFO: Waiting for pod pod-5dcf7bf2-a84f-4dab-9ba8-dd1360694f62 to disappear
Jan 25 01:18:37.692: INFO: Pod pod-5dcf7bf2-a84f-4dab-9ba8-dd1360694f62 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:18:37.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1709" for this suite.

• [SLOW TEST:8.321 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4134,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:18:37.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-9aaccaac-c61e-4315-b70c-f2d221f78e25
STEP: Creating a pod to test consume secrets
Jan 25 01:18:37.911: INFO: Waiting up to 5m0s for pod "pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889" in namespace "secrets-8067" to be "success or failure"
Jan 25 01:18:37.918: INFO: Pod "pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889": Phase="Pending", Reason="", readiness=false. Elapsed: 7.768568ms
Jan 25 01:18:40.027: INFO: Pod "pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115890838s
Jan 25 01:18:42.031: INFO: Pod "pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120632375s
Jan 25 01:18:44.036: INFO: Pod "pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125466237s
Jan 25 01:18:46.042: INFO: Pod "pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131487717s
Jan 25 01:18:48.048: INFO: Pod "pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.136916933s
STEP: Saw pod success
Jan 25 01:18:48.048: INFO: Pod "pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889" satisfied condition "success or failure"
Jan 25 01:18:48.051: INFO: Trying to get logs from node jerma-node pod pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889 container secret-volume-test: 
STEP: delete the pod
Jan 25 01:18:48.339: INFO: Waiting for pod pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889 to disappear
Jan 25 01:18:48.416: INFO: Pod pod-secrets-10e01e9a-f078-4d6e-b188-35fadc476889 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:18:48.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8067" for this suite.

• [SLOW TEST:10.726 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4158,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:18:48.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-ae02678d-0113-449a-9907-f15d68a371b9
STEP: Creating a pod to test consume configMaps
Jan 25 01:18:48.619: INFO: Waiting up to 5m0s for pod "pod-configmaps-00bc042f-d05d-4915-972e-ae0b310f8d71" in namespace "configmap-5656" to be "success or failure"
Jan 25 01:18:48.632: INFO: Pod "pod-configmaps-00bc042f-d05d-4915-972e-ae0b310f8d71": Phase="Pending", Reason="", readiness=false. Elapsed: 12.488374ms
Jan 25 01:18:50.643: INFO: Pod "pod-configmaps-00bc042f-d05d-4915-972e-ae0b310f8d71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023332469s
Jan 25 01:18:52.662: INFO: Pod "pod-configmaps-00bc042f-d05d-4915-972e-ae0b310f8d71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042521065s
Jan 25 01:18:54.765: INFO: Pod "pod-configmaps-00bc042f-d05d-4915-972e-ae0b310f8d71": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145252791s
Jan 25 01:18:56.769: INFO: Pod "pod-configmaps-00bc042f-d05d-4915-972e-ae0b310f8d71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.149587114s
STEP: Saw pod success
Jan 25 01:18:56.769: INFO: Pod "pod-configmaps-00bc042f-d05d-4915-972e-ae0b310f8d71" satisfied condition "success or failure"
Jan 25 01:18:56.773: INFO: Trying to get logs from node jerma-node pod pod-configmaps-00bc042f-d05d-4915-972e-ae0b310f8d71 container configmap-volume-test: 
STEP: delete the pod
Jan 25 01:18:57.053: INFO: Waiting for pod pod-configmaps-00bc042f-d05d-4915-972e-ae0b310f8d71 to disappear
Jan 25 01:18:57.133: INFO: Pod pod-configmaps-00bc042f-d05d-4915-972e-ae0b310f8d71 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:18:57.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5656" for this suite.

• [SLOW TEST:8.701 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4226,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:18:57.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 01:18:57.901: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 01:18:59.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:19:01.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:19:03.925: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511937, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 01:19:06.944: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:19:07.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-424" for this suite.
STEP: Destroying namespace "webhook-424-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.044 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":256,"skipped":4233,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:19:07.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name projected-secret-test-8e6c61e8-8f48-4274-ba44-6924fd29d1f7
STEP: Creating a pod to test consume secrets
Jan 25 01:19:07.337: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8" in namespace "projected-2313" to be "success or failure"
Jan 25 01:19:07.341: INFO: Pod "pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.479006ms
Jan 25 01:19:09.347: INFO: Pod "pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010064257s
Jan 25 01:19:11.357: INFO: Pod "pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019586874s
Jan 25 01:19:13.363: INFO: Pod "pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026298576s
Jan 25 01:19:15.393: INFO: Pod "pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055927329s
Jan 25 01:19:17.454: INFO: Pod "pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117309785s
STEP: Saw pod success
Jan 25 01:19:17.455: INFO: Pod "pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8" satisfied condition "success or failure"
Jan 25 01:19:17.463: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8 container secret-volume-test: 
STEP: delete the pod
Jan 25 01:19:17.530: INFO: Waiting for pod pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8 to disappear
Jan 25 01:19:17.534: INFO: Pod pod-projected-secrets-3875ff85-08d4-4944-83ca-5eb5fbf863e8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:19:17.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2313" for this suite.

• [SLOW TEST:10.353 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4246,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:19:17.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:19:25.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6964" for this suite.

• [SLOW TEST:8.292 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4253,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:19:25.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 25 01:19:40.199: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 01:19:40.209: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 01:19:42.210: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 01:19:42.217: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 01:19:44.209: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 01:19:44.216: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 01:19:46.209: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 01:19:46.215: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 25 01:19:48.209: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 25 01:19:48.216: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:19:48.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7103" for this suite.

• [SLOW TEST:22.445 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4257,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:19:48.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 01:19:48.988: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 01:19:51.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511989, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511989, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511989, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511988, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:19:53.028: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511989, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511989, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511989, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511988, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:19:55.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511989, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511989, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511989, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715511988, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 01:19:58.037: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:19:58.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4131" for this suite.
STEP: Destroying namespace "webhook-4131-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.262 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":260,"skipped":4258,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:19:58.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 25 01:19:58.730: INFO: Waiting up to 5m0s for pod "pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e" in namespace "emptydir-5565" to be "success or failure"
Jan 25 01:19:58.737: INFO: Pod "pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312542ms
Jan 25 01:20:00.811: INFO: Pod "pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081020041s
Jan 25 01:20:02.822: INFO: Pod "pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092039912s
Jan 25 01:20:04.832: INFO: Pod "pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101654186s
Jan 25 01:20:06.838: INFO: Pod "pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108287064s
Jan 25 01:20:08.856: INFO: Pod "pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125292433s
Jan 25 01:20:10.862: INFO: Pod "pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.131480632s
STEP: Saw pod success
Jan 25 01:20:10.862: INFO: Pod "pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e" satisfied condition "success or failure"
Jan 25 01:20:10.865: INFO: Trying to get logs from node jerma-node pod pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e container test-container: 
STEP: delete the pod
Jan 25 01:20:10.902: INFO: Waiting for pod pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e to disappear
Jan 25 01:20:10.910: INFO: Pod pod-1faee4fa-f7f1-400d-b5ce-2a9dde24334e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:20:10.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5565" for this suite.

• [SLOW TEST:12.373 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4281,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:20:10.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2965 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2965;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2965 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2965;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2965.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2965.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2965.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2965.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2965.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2965.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2965.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2965.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2965.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2965.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2965.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 6.14.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.14.6_udp@PTR;check="$$(dig +tcp +noall +answer +search 6.14.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.14.6_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2965 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2965;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2965 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2965;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2965.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2965.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2965.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2965.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2965.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2965.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2965.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2965.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2965.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2965.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2965.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2965.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 6.14.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.14.6_udp@PTR;check="$$(dig +tcp +noall +answer +search 6.14.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.14.6_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 25 01:20:23.108: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.117: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.187: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.193: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.198: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.203: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.208: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.214: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.248: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.256: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.265: INFO: Unable to read jessie_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.281: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.286: INFO: Unable to read jessie_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.290: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.294: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.298: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:23.346: INFO: Lookups using dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2965 wheezy_tcp@dns-test-service.dns-2965 wheezy_udp@dns-test-service.dns-2965.svc wheezy_tcp@dns-test-service.dns-2965.svc wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2965 jessie_tcp@dns-test-service.dns-2965 jessie_udp@dns-test-service.dns-2965.svc jessie_tcp@dns-test-service.dns-2965.svc jessie_udp@_http._tcp.dns-test-service.dns-2965.svc jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc]

Jan 25 01:20:28.358: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.364: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.370: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.380: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.384: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.388: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.392: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.426: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.430: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.433: INFO: Unable to read jessie_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.438: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.442: INFO: Unable to read jessie_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.446: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.451: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.455: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:28.495: INFO: Lookups using dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2965 wheezy_tcp@dns-test-service.dns-2965 wheezy_udp@dns-test-service.dns-2965.svc wheezy_tcp@dns-test-service.dns-2965.svc wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2965 jessie_tcp@dns-test-service.dns-2965 jessie_udp@dns-test-service.dns-2965.svc jessie_tcp@dns-test-service.dns-2965.svc jessie_udp@_http._tcp.dns-test-service.dns-2965.svc jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc]

Jan 25 01:20:33.357: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.363: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.367: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.371: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.375: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.379: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.383: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.387: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.418: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.422: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.425: INFO: Unable to read jessie_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.427: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.430: INFO: Unable to read jessie_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.519: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.524: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.548: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:33.583: INFO: Lookups using dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2965 wheezy_tcp@dns-test-service.dns-2965 wheezy_udp@dns-test-service.dns-2965.svc wheezy_tcp@dns-test-service.dns-2965.svc wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2965 jessie_tcp@dns-test-service.dns-2965 jessie_udp@dns-test-service.dns-2965.svc jessie_tcp@dns-test-service.dns-2965.svc jessie_udp@_http._tcp.dns-test-service.dns-2965.svc jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc]

Jan 25 01:20:38.356: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.362: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.367: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.371: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.375: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.385: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.388: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.422: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.425: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.429: INFO: Unable to read jessie_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.433: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.440: INFO: Unable to read jessie_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.445: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.451: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.460: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:38.491: INFO: Lookups using dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2965 wheezy_tcp@dns-test-service.dns-2965 wheezy_udp@dns-test-service.dns-2965.svc wheezy_tcp@dns-test-service.dns-2965.svc wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2965 jessie_tcp@dns-test-service.dns-2965 jessie_udp@dns-test-service.dns-2965.svc jessie_tcp@dns-test-service.dns-2965.svc jessie_udp@_http._tcp.dns-test-service.dns-2965.svc jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc]

Jan 25 01:20:43.361: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.369: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.375: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.381: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.419: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.426: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.430: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.433: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.470: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.478: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.481: INFO: Unable to read jessie_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.485: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.490: INFO: Unable to read jessie_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.496: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.498: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.501: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:43.540: INFO: Lookups using dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2965 wheezy_tcp@dns-test-service.dns-2965 wheezy_udp@dns-test-service.dns-2965.svc wheezy_tcp@dns-test-service.dns-2965.svc wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2965 jessie_tcp@dns-test-service.dns-2965 jessie_udp@dns-test-service.dns-2965.svc jessie_tcp@dns-test-service.dns-2965.svc jessie_udp@_http._tcp.dns-test-service.dns-2965.svc jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc]

Jan 25 01:20:48.355: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.362: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.367: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.372: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.385: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.389: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.427: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.433: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.437: INFO: Unable to read jessie_udp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.442: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965 from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.446: INFO: Unable to read jessie_udp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.451: INFO: Unable to read jessie_tcp@dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.454: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.460: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc from pod dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f: the server could not find the requested resource (get pods dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f)
Jan 25 01:20:48.486: INFO: Lookups using dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2965 wheezy_tcp@dns-test-service.dns-2965 wheezy_udp@dns-test-service.dns-2965.svc wheezy_tcp@dns-test-service.dns-2965.svc wheezy_udp@_http._tcp.dns-test-service.dns-2965.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2965.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2965 jessie_tcp@dns-test-service.dns-2965 jessie_udp@dns-test-service.dns-2965.svc jessie_tcp@dns-test-service.dns-2965.svc jessie_udp@_http._tcp.dns-test-service.dns-2965.svc jessie_tcp@_http._tcp.dns-test-service.dns-2965.svc]

Jan 25 01:20:53.507: INFO: DNS probes using dns-2965/dns-test-5bc1ac3f-f062-4bf2-8f83-b29e797d185f succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:20:53.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2965" for this suite.

• [SLOW TEST:43.111 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":262,"skipped":4284,"failed":0}
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:20:54.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override command
Jan 25 01:20:54.212: INFO: Waiting up to 5m0s for pod "client-containers-7d6913c3-3471-4a24-8470-f78445d63897" in namespace "containers-5908" to be "success or failure"
Jan 25 01:20:54.232: INFO: Pod "client-containers-7d6913c3-3471-4a24-8470-f78445d63897": Phase="Pending", Reason="", readiness=false. Elapsed: 19.713113ms
Jan 25 01:20:56.239: INFO: Pod "client-containers-7d6913c3-3471-4a24-8470-f78445d63897": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027441921s
Jan 25 01:20:58.244: INFO: Pod "client-containers-7d6913c3-3471-4a24-8470-f78445d63897": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03200744s
Jan 25 01:21:00.885: INFO: Pod "client-containers-7d6913c3-3471-4a24-8470-f78445d63897": Phase="Pending", Reason="", readiness=false. Elapsed: 6.673200614s
Jan 25 01:21:02.897: INFO: Pod "client-containers-7d6913c3-3471-4a24-8470-f78445d63897": Phase="Pending", Reason="", readiness=false. Elapsed: 8.68512077s
Jan 25 01:21:04.906: INFO: Pod "client-containers-7d6913c3-3471-4a24-8470-f78445d63897": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.693674486s
STEP: Saw pod success
Jan 25 01:21:04.906: INFO: Pod "client-containers-7d6913c3-3471-4a24-8470-f78445d63897" satisfied condition "success or failure"
Jan 25 01:21:04.917: INFO: Trying to get logs from node jerma-node pod client-containers-7d6913c3-3471-4a24-8470-f78445d63897 container test-container: 
STEP: delete the pod
Jan 25 01:21:04.957: INFO: Waiting for pod client-containers-7d6913c3-3471-4a24-8470-f78445d63897 to disappear
Jan 25 01:21:04.961: INFO: Pod client-containers-7d6913c3-3471-4a24-8470-f78445d63897 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:21:04.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5908" for this suite.

• [SLOW TEST:11.022 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4287,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:21:05.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-7a114f6c-efb2-49a2-9250-5558bdf7575d
STEP: Creating a pod to test consume configMaps
Jan 25 01:21:05.304: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cdf0a026-06ae-4bec-b1c0-34a317ec9ebc" in namespace "projected-7672" to be "success or failure"
Jan 25 01:21:05.385: INFO: Pod "pod-projected-configmaps-cdf0a026-06ae-4bec-b1c0-34a317ec9ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 81.497746ms
Jan 25 01:21:07.397: INFO: Pod "pod-projected-configmaps-cdf0a026-06ae-4bec-b1c0-34a317ec9ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093020243s
Jan 25 01:21:09.403: INFO: Pod "pod-projected-configmaps-cdf0a026-06ae-4bec-b1c0-34a317ec9ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099538051s
Jan 25 01:21:11.432: INFO: Pod "pod-projected-configmaps-cdf0a026-06ae-4bec-b1c0-34a317ec9ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128598475s
Jan 25 01:21:13.446: INFO: Pod "pod-projected-configmaps-cdf0a026-06ae-4bec-b1c0-34a317ec9ebc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.14251417s
STEP: Saw pod success
Jan 25 01:21:13.446: INFO: Pod "pod-projected-configmaps-cdf0a026-06ae-4bec-b1c0-34a317ec9ebc" satisfied condition "success or failure"
Jan 25 01:21:13.451: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-cdf0a026-06ae-4bec-b1c0-34a317ec9ebc container projected-configmap-volume-test: 
STEP: delete the pod
Jan 25 01:21:13.489: INFO: Waiting for pod pod-projected-configmaps-cdf0a026-06ae-4bec-b1c0-34a317ec9ebc to disappear
Jan 25 01:21:13.528: INFO: Pod pod-projected-configmaps-cdf0a026-06ae-4bec-b1c0-34a317ec9ebc no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:21:13.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7672" for this suite.

• [SLOW TEST:8.529 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4315,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:21:13.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 25 01:21:13.723: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bff8d8ac-8659-4143-ac1e-c36a6ddb4ac6" in namespace "projected-9116" to be "success or failure"
Jan 25 01:21:13.730: INFO: Pod "downwardapi-volume-bff8d8ac-8659-4143-ac1e-c36a6ddb4ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.486041ms
Jan 25 01:21:15.736: INFO: Pod "downwardapi-volume-bff8d8ac-8659-4143-ac1e-c36a6ddb4ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013235244s
Jan 25 01:21:17.743: INFO: Pod "downwardapi-volume-bff8d8ac-8659-4143-ac1e-c36a6ddb4ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019415687s
Jan 25 01:21:19.751: INFO: Pod "downwardapi-volume-bff8d8ac-8659-4143-ac1e-c36a6ddb4ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027852795s
Jan 25 01:21:21.759: INFO: Pod "downwardapi-volume-bff8d8ac-8659-4143-ac1e-c36a6ddb4ac6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.036061404s
STEP: Saw pod success
Jan 25 01:21:21.759: INFO: Pod "downwardapi-volume-bff8d8ac-8659-4143-ac1e-c36a6ddb4ac6" satisfied condition "success or failure"
Jan 25 01:21:21.763: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-bff8d8ac-8659-4143-ac1e-c36a6ddb4ac6 container client-container: 
STEP: delete the pod
Jan 25 01:21:21.917: INFO: Waiting for pod downwardapi-volume-bff8d8ac-8659-4143-ac1e-c36a6ddb4ac6 to disappear
Jan 25 01:21:21.924: INFO: Pod downwardapi-volume-bff8d8ac-8659-4143-ac1e-c36a6ddb4ac6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:21:21.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9116" for this suite.

• [SLOW TEST:8.351 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4333,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:21:21.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:21:35.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8230" for this suite.

• [SLOW TEST:13.392 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":266,"skipped":4366,"failed":0}
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:21:35.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-20846810-7234-4339-a91e-fa3bc7fa6290
STEP: Creating a pod to test consume secrets
Jan 25 01:21:35.496: INFO: Waiting up to 5m0s for pod "pod-secrets-38705769-1e04-45d4-baa5-18aad85f435b" in namespace "secrets-2749" to be "success or failure"
Jan 25 01:21:35.505: INFO: Pod "pod-secrets-38705769-1e04-45d4-baa5-18aad85f435b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.051786ms
Jan 25 01:21:37.514: INFO: Pod "pod-secrets-38705769-1e04-45d4-baa5-18aad85f435b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017876518s
Jan 25 01:21:39.520: INFO: Pod "pod-secrets-38705769-1e04-45d4-baa5-18aad85f435b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024354345s
Jan 25 01:21:41.531: INFO: Pod "pod-secrets-38705769-1e04-45d4-baa5-18aad85f435b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034671073s
Jan 25 01:21:43.536: INFO: Pod "pod-secrets-38705769-1e04-45d4-baa5-18aad85f435b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040320141s
STEP: Saw pod success
Jan 25 01:21:43.536: INFO: Pod "pod-secrets-38705769-1e04-45d4-baa5-18aad85f435b" satisfied condition "success or failure"
Jan 25 01:21:43.540: INFO: Trying to get logs from node jerma-node pod pod-secrets-38705769-1e04-45d4-baa5-18aad85f435b container secret-env-test: 
STEP: delete the pod
Jan 25 01:21:43.661: INFO: Waiting for pod pod-secrets-38705769-1e04-45d4-baa5-18aad85f435b to disappear
Jan 25 01:21:43.681: INFO: Pod pod-secrets-38705769-1e04-45d4-baa5-18aad85f435b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:21:43.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2749" for this suite.

• [SLOW TEST:8.364 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4368,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:21:43.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 25 01:21:52.452: INFO: Successfully updated pod "annotationupdatee943def4-f17f-440d-be73-1f8b591d6d3b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:21:55.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-868" for this suite.

• [SLOW TEST:11.740 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4390,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:21:55.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Jan 25 01:21:55.632: INFO: Waiting up to 5m0s for pod "client-containers-75bef990-c31c-484a-8501-249fb8bc2f78" in namespace "containers-4219" to be "success or failure"
Jan 25 01:21:55.638: INFO: Pod "client-containers-75bef990-c31c-484a-8501-249fb8bc2f78": Phase="Pending", Reason="", readiness=false. Elapsed: 5.890802ms
Jan 25 01:21:57.645: INFO: Pod "client-containers-75bef990-c31c-484a-8501-249fb8bc2f78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013301369s
Jan 25 01:21:59.677: INFO: Pod "client-containers-75bef990-c31c-484a-8501-249fb8bc2f78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045046802s
Jan 25 01:22:01.685: INFO: Pod "client-containers-75bef990-c31c-484a-8501-249fb8bc2f78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052583819s
Jan 25 01:22:03.706: INFO: Pod "client-containers-75bef990-c31c-484a-8501-249fb8bc2f78": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073633787s
Jan 25 01:22:05.785: INFO: Pod "client-containers-75bef990-c31c-484a-8501-249fb8bc2f78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.152717064s
STEP: Saw pod success
Jan 25 01:22:05.785: INFO: Pod "client-containers-75bef990-c31c-484a-8501-249fb8bc2f78" satisfied condition "success or failure"
Jan 25 01:22:05.789: INFO: Trying to get logs from node jerma-node pod client-containers-75bef990-c31c-484a-8501-249fb8bc2f78 container test-container: 
STEP: delete the pod
Jan 25 01:22:06.130: INFO: Waiting for pod client-containers-75bef990-c31c-484a-8501-249fb8bc2f78 to disappear
Jan 25 01:22:06.134: INFO: Pod client-containers-75bef990-c31c-484a-8501-249fb8bc2f78 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:22:06.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4219" for this suite.

• [SLOW TEST:10.710 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4400,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:22:06.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test env composition
Jan 25 01:22:06.325: INFO: Waiting up to 5m0s for pod "var-expansion-95267a4c-bef2-4d70-9934-9351a76cfc65" in namespace "var-expansion-9948" to be "success or failure"
Jan 25 01:22:06.496: INFO: Pod "var-expansion-95267a4c-bef2-4d70-9934-9351a76cfc65": Phase="Pending", Reason="", readiness=false. Elapsed: 170.697594ms
Jan 25 01:22:08.516: INFO: Pod "var-expansion-95267a4c-bef2-4d70-9934-9351a76cfc65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190986738s
Jan 25 01:22:10.553: INFO: Pod "var-expansion-95267a4c-bef2-4d70-9934-9351a76cfc65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227658639s
Jan 25 01:22:12.576: INFO: Pod "var-expansion-95267a4c-bef2-4d70-9934-9351a76cfc65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250616558s
Jan 25 01:22:14.587: INFO: Pod "var-expansion-95267a4c-bef2-4d70-9934-9351a76cfc65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.26185956s
STEP: Saw pod success
Jan 25 01:22:14.587: INFO: Pod "var-expansion-95267a4c-bef2-4d70-9934-9351a76cfc65" satisfied condition "success or failure"
Jan 25 01:22:14.592: INFO: Trying to get logs from node jerma-node pod var-expansion-95267a4c-bef2-4d70-9934-9351a76cfc65 container dapi-container: 
STEP: delete the pod
Jan 25 01:22:14.686: INFO: Waiting for pod var-expansion-95267a4c-bef2-4d70-9934-9351a76cfc65 to disappear
Jan 25 01:22:14.695: INFO: Pod var-expansion-95267a4c-bef2-4d70-9934-9351a76cfc65 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:22:14.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9948" for this suite.

• [SLOW TEST:8.558 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4411,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:22:14.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Jan 25 01:22:14.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 25 01:22:15.185: INFO: stderr: ""
Jan 25 01:22:15.185: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:22:15.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6777" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":271,"skipped":4441,"failed":0}

------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:22:15.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:331
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Jan 25 01:22:15.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4005'
Jan 25 01:22:15.922: INFO: stderr: ""
Jan 25 01:22:15.922: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 25 01:22:15.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4005'
Jan 25 01:22:16.176: INFO: stderr: ""
Jan 25 01:22:16.176: INFO: stdout: "update-demo-nautilus-9n9t2 update-demo-nautilus-r5w22 "
Jan 25 01:22:16.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9n9t2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4005'
Jan 25 01:22:16.306: INFO: stderr: ""
Jan 25 01:22:16.306: INFO: stdout: ""
Jan 25 01:22:16.306: INFO: update-demo-nautilus-9n9t2 is created but not running
Jan 25 01:22:21.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4005'
Jan 25 01:22:22.679: INFO: stderr: ""
Jan 25 01:22:22.679: INFO: stdout: "update-demo-nautilus-9n9t2 update-demo-nautilus-r5w22 "
Jan 25 01:22:22.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9n9t2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4005'
Jan 25 01:22:23.142: INFO: stderr: ""
Jan 25 01:22:23.142: INFO: stdout: ""
Jan 25 01:22:23.142: INFO: update-demo-nautilus-9n9t2 is created but not running
Jan 25 01:22:28.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4005'
Jan 25 01:22:30.182: INFO: stderr: ""
Jan 25 01:22:30.182: INFO: stdout: "update-demo-nautilus-9n9t2 update-demo-nautilus-r5w22 "
Jan 25 01:22:30.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9n9t2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4005'
Jan 25 01:22:30.283: INFO: stderr: ""
Jan 25 01:22:30.283: INFO: stdout: "true"
Jan 25 01:22:30.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9n9t2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4005'
Jan 25 01:22:30.473: INFO: stderr: ""
Jan 25 01:22:30.473: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 01:22:30.474: INFO: validating pod update-demo-nautilus-9n9t2
Jan 25 01:22:30.488: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 01:22:30.488: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 01:22:30.488: INFO: update-demo-nautilus-9n9t2 is verified up and running
Jan 25 01:22:30.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r5w22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4005'
Jan 25 01:22:30.620: INFO: stderr: ""
Jan 25 01:22:30.620: INFO: stdout: "true"
Jan 25 01:22:30.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r5w22 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4005'
Jan 25 01:22:30.742: INFO: stderr: ""
Jan 25 01:22:30.742: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 25 01:22:30.742: INFO: validating pod update-demo-nautilus-r5w22
Jan 25 01:22:30.749: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 25 01:22:30.750: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 25 01:22:30.750: INFO: update-demo-nautilus-r5w22 is verified up and running
STEP: using delete to clean up resources
Jan 25 01:22:30.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4005'
Jan 25 01:22:30.881: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 25 01:22:30.881: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 25 01:22:30.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4005'
Jan 25 01:22:31.005: INFO: stderr: "No resources found in kubectl-4005 namespace.\n"
Jan 25 01:22:31.005: INFO: stdout: ""
Jan 25 01:22:31.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4005 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 25 01:22:31.140: INFO: stderr: ""
Jan 25 01:22:31.141: INFO: stdout: "update-demo-nautilus-9n9t2\nupdate-demo-nautilus-r5w22\n"
Jan 25 01:22:31.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4005'
Jan 25 01:22:32.834: INFO: stderr: "No resources found in kubectl-4005 namespace.\n"
Jan 25 01:22:32.834: INFO: stdout: ""
Jan 25 01:22:32.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4005 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 25 01:22:33.047: INFO: stderr: ""
Jan 25 01:22:33.047: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:22:33.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4005" for this suite.

• [SLOW TEST:17.855 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":272,"skipped":4441,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:22:33.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-2693569c-331b-4dbe-b50a-c2a613beb7a5
STEP: Creating a pod to test consume configMaps
Jan 25 01:22:33.779: INFO: Waiting up to 5m0s for pod "pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931" in namespace "configmap-1796" to be "success or failure"
Jan 25 01:22:33.849: INFO: Pod "pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931": Phase="Pending", Reason="", readiness=false. Elapsed: 69.203854ms
Jan 25 01:22:35.920: INFO: Pod "pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140550705s
Jan 25 01:22:37.927: INFO: Pod "pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147206339s
Jan 25 01:22:39.934: INFO: Pod "pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154252748s
Jan 25 01:22:41.942: INFO: Pod "pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163148628s
Jan 25 01:22:43.958: INFO: Pod "pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178566136s
STEP: Saw pod success
Jan 25 01:22:43.958: INFO: Pod "pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931" satisfied condition "success or failure"
Jan 25 01:22:43.995: INFO: Trying to get logs from node jerma-node pod pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931 container configmap-volume-test: 
STEP: delete the pod
Jan 25 01:22:44.032: INFO: Waiting for pod pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931 to disappear
Jan 25 01:22:44.047: INFO: Pod pod-configmaps-317abb89-3525-4020-b0c3-05f6daf66931 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:22:44.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1796" for this suite.

• [SLOW TEST:11.005 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4478,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:22:44.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1734
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 25 01:22:44.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5266'
Jan 25 01:22:44.400: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 25 01:22:44.400: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1739
Jan 25 01:22:46.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5266'
Jan 25 01:22:46.732: INFO: stderr: ""
Jan 25 01:22:46.732: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:22:46.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5266" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":274,"skipped":4484,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:22:46.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 01:22:46.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 25 01:22:49.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4827 create -f -'
Jan 25 01:22:52.144: INFO: stderr: ""
Jan 25 01:22:52.144: INFO: stdout: "e2e-test-crd-publish-openapi-4225-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 25 01:22:52.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4827 delete e2e-test-crd-publish-openapi-4225-crds test-cr'
Jan 25 01:22:52.428: INFO: stderr: ""
Jan 25 01:22:52.428: INFO: stdout: "e2e-test-crd-publish-openapi-4225-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 25 01:22:52.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4827 apply -f -'
Jan 25 01:22:52.691: INFO: stderr: ""
Jan 25 01:22:52.691: INFO: stdout: "e2e-test-crd-publish-openapi-4225-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 25 01:22:52.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4827 delete e2e-test-crd-publish-openapi-4225-crds test-cr'
Jan 25 01:22:52.785: INFO: stderr: ""
Jan 25 01:22:52.785: INFO: stdout: "e2e-test-crd-publish-openapi-4225-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 25 01:22:52.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4225-crds'
Jan 25 01:22:53.146: INFO: stderr: ""
Jan 25 01:22:53.147: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4225-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:22:56.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4827" for this suite.

• [SLOW TEST:10.052 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":275,"skipped":4508,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:22:56.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 25 01:22:57.374: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 25 01:22:59.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:23:01.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 25 01:23:03.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715512177, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 25 01:23:06.434: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:23:07.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-560" for this suite.
STEP: Destroying namespace "webhook-560-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.617 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":276,"skipped":4508,"failed":0}
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:23:07.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 25 01:23:07.578: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 25 01:23:12.622: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 25 01:23:18.645: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67
Jan 25 01:23:26.737: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-8578 /apis/apps/v1/namespaces/deployment-8578/deployments/test-cleanup-deployment 9411907b-84b6-4000-97a7-5333cea05b8e 4141844 1 2020-01-25 01:23:18 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048e6e18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-25 01:23:18 +0000 UTC,LastTransitionTime:2020-01-25 01:23:18 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-01-25 01:23:25 +0000 UTC,LastTransitionTime:2020-01-25 01:23:18 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 25 01:23:26.742: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-8578 /apis/apps/v1/namespaces/deployment-8578/replicasets/test-cleanup-deployment-55ffc6b7b6 356afe34-3c9a-43c1-89a5-daec00900655 4141833 1 2020-01-25 01:23:18 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 9411907b-84b6-4000-97a7-5333cea05b8e 0xc0048e72d7 0xc0048e72d8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048e7358  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 25 01:23:26.746: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-vfdnv" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-vfdnv test-cleanup-deployment-55ffc6b7b6- deployment-8578 /api/v1/namespaces/deployment-8578/pods/test-cleanup-deployment-55ffc6b7b6-vfdnv 3917aad8-bb09-4073-895f-7d52e1d0b379 4141832 0 2020-01-25 01:23:18 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 356afe34-3c9a-43c1-89a5-daec00900655 0xc004a39f67 0xc004a39f68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6gzdh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6gzdh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6gzdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 01:23:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 01:23:25 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 01:23:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-25 01:23:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-25 01:23:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-25 01:23:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://5cb35b1be09ba355f825cd85380a5e215a6ab425bb1aadcdb098691fd803cb7a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:23:26.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8578" for this suite.

• [SLOW TEST:19.340 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":277,"skipped":4508,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 25 01:23:26.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 25 01:23:49.146: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5244 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 01:23:49.146: INFO: >>> kubeConfig: /root/.kube/config
I0125 01:23:49.216074       9 log.go:172] (0xc002bd8370) (0xc0017c9e00) Create stream
I0125 01:23:49.216174       9 log.go:172] (0xc002bd8370) (0xc0017c9e00) Stream added, broadcasting: 1
I0125 01:23:49.219992       9 log.go:172] (0xc002bd8370) Reply frame received for 1
I0125 01:23:49.220022       9 log.go:172] (0xc002bd8370) (0xc002c74780) Create stream
I0125 01:23:49.220032       9 log.go:172] (0xc002bd8370) (0xc002c74780) Stream added, broadcasting: 3
I0125 01:23:49.221690       9 log.go:172] (0xc002bd8370) Reply frame received for 3
I0125 01:23:49.221725       9 log.go:172] (0xc002bd8370) (0xc001c4a8c0) Create stream
I0125 01:23:49.221740       9 log.go:172] (0xc002bd8370) (0xc001c4a8c0) Stream added, broadcasting: 5
I0125 01:23:49.223461       9 log.go:172] (0xc002bd8370) Reply frame received for 5
I0125 01:23:49.302479       9 log.go:172] (0xc002bd8370) Data frame received for 3
I0125 01:23:49.302543       9 log.go:172] (0xc002c74780) (3) Data frame handling
I0125 01:23:49.302580       9 log.go:172] (0xc002c74780) (3) Data frame sent
I0125 01:23:49.372078       9 log.go:172] (0xc002bd8370) (0xc001c4a8c0) Stream removed, broadcasting: 5
I0125 01:23:49.372280       9 log.go:172] (0xc002bd8370) Data frame received for 1
I0125 01:23:49.372293       9 log.go:172] (0xc0017c9e00) (1) Data frame handling
I0125 01:23:49.372311       9 log.go:172] (0xc0017c9e00) (1) Data frame sent
I0125 01:23:49.372460       9 log.go:172] (0xc002bd8370) (0xc0017c9e00) Stream removed, broadcasting: 1
I0125 01:23:49.372601       9 log.go:172] (0xc002bd8370) (0xc002c74780) Stream removed, broadcasting: 3
I0125 01:23:49.372663       9 log.go:172] (0xc002bd8370) Go away received
I0125 01:23:49.372987       9 log.go:172] (0xc002bd8370) (0xc0017c9e00) Stream removed, broadcasting: 1
I0125 01:23:49.373031       9 log.go:172] (0xc002bd8370) (0xc002c74780) Stream removed, broadcasting: 3
I0125 01:23:49.373053       9 log.go:172] (0xc002bd8370) (0xc001c4a8c0) Stream removed, broadcasting: 5
Jan 25 01:23:49.373: INFO: Exec stderr: ""
Jan 25 01:23:49.373: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5244 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 01:23:49.373: INFO: >>> kubeConfig: /root/.kube/config
I0125 01:23:49.411619       9 log.go:172] (0xc00197b340) (0xc002421f40) Create stream
I0125 01:23:49.411709       9 log.go:172] (0xc00197b340) (0xc002421f40) Stream added, broadcasting: 1
I0125 01:23:49.414184       9 log.go:172] (0xc00197b340) Reply frame received for 1
I0125 01:23:49.414222       9 log.go:172] (0xc00197b340) (0xc001c4ab40) Create stream
I0125 01:23:49.414231       9 log.go:172] (0xc00197b340) (0xc001c4ab40) Stream added, broadcasting: 3
I0125 01:23:49.415352       9 log.go:172] (0xc00197b340) Reply frame received for 3
I0125 01:23:49.415388       9 log.go:172] (0xc00197b340) (0xc001c4ae60) Create stream
I0125 01:23:49.415399       9 log.go:172] (0xc00197b340) (0xc001c4ae60) Stream added, broadcasting: 5
I0125 01:23:49.420472       9 log.go:172] (0xc00197b340) Reply frame received for 5
I0125 01:23:49.498035       9 log.go:172] (0xc00197b340) Data frame received for 3
I0125 01:23:49.498107       9 log.go:172] (0xc001c4ab40) (3) Data frame handling
I0125 01:23:49.498141       9 log.go:172] (0xc001c4ab40) (3) Data frame sent
I0125 01:23:49.562255       9 log.go:172] (0xc00197b340) Data frame received for 1
I0125 01:23:49.562301       9 log.go:172] (0xc002421f40) (1) Data frame handling
I0125 01:23:49.562320       9 log.go:172] (0xc002421f40) (1) Data frame sent
I0125 01:23:49.562337       9 log.go:172] (0xc00197b340) (0xc001c4ae60) Stream removed, broadcasting: 5
I0125 01:23:49.562421       9 log.go:172] (0xc00197b340) (0xc001c4ab40) Stream removed, broadcasting: 3
I0125 01:23:49.562459       9 log.go:172] (0xc00197b340) (0xc002421f40) Stream removed, broadcasting: 1
I0125 01:23:49.562476       9 log.go:172] (0xc00197b340) Go away received
I0125 01:23:49.562731       9 log.go:172] (0xc00197b340) (0xc002421f40) Stream removed, broadcasting: 1
I0125 01:23:49.562787       9 log.go:172] (0xc00197b340) (0xc001c4ab40) Stream removed, broadcasting: 3
I0125 01:23:49.562826       9 log.go:172] (0xc00197b340) (0xc001c4ae60) Stream removed, broadcasting: 5
Jan 25 01:23:49.562: INFO: Exec stderr: ""
Jan 25 01:23:49.562: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5244 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 01:23:49.563: INFO: >>> kubeConfig: /root/.kube/config
I0125 01:23:49.606340       9 log.go:172] (0xc002bd89a0) (0xc000b2a8c0) Create stream
I0125 01:23:49.606436       9 log.go:172] (0xc002bd89a0) (0xc000b2a8c0) Stream added, broadcasting: 1
I0125 01:23:49.609739       9 log.go:172] (0xc002bd89a0) Reply frame received for 1
I0125 01:23:49.609780       9 log.go:172] (0xc002bd89a0) (0xc000b2aa00) Create stream
I0125 01:23:49.609800       9 log.go:172] (0xc002bd89a0) (0xc000b2aa00) Stream added, broadcasting: 3
I0125 01:23:49.611348       9 log.go:172] (0xc002bd89a0) Reply frame received for 3
I0125 01:23:49.611391       9 log.go:172] (0xc002bd89a0) (0xc0017e2000) Create stream
I0125 01:23:49.611403       9 log.go:172] (0xc002bd89a0) (0xc0017e2000) Stream added, broadcasting: 5
I0125 01:23:49.612581       9 log.go:172] (0xc002bd89a0) Reply frame received for 5
I0125 01:23:49.683450       9 log.go:172] (0xc002bd89a0) Data frame received for 3
I0125 01:23:49.683472       9 log.go:172] (0xc000b2aa00) (3) Data frame handling
I0125 01:23:49.683494       9 log.go:172] (0xc000b2aa00) (3) Data frame sent
I0125 01:23:49.756957       9 log.go:172] (0xc002bd89a0) Data frame received for 1
I0125 01:23:49.756984       9 log.go:172] (0xc000b2a8c0) (1) Data frame handling
I0125 01:23:49.757010       9 log.go:172] (0xc000b2a8c0) (1) Data frame sent
I0125 01:23:49.758725       9 log.go:172] (0xc002bd89a0) (0xc0017e2000) Stream removed, broadcasting: 5
I0125 01:23:49.758830       9 log.go:172] (0xc002bd89a0) (0xc000b2a8c0) Stream removed, broadcasting: 1
I0125 01:23:49.759151       9 log.go:172] (0xc002bd89a0) (0xc000b2aa00) Stream removed, broadcasting: 3
I0125 01:23:49.759197       9 log.go:172] (0xc002bd89a0) Go away received
I0125 01:23:49.759349       9 log.go:172] (0xc002bd89a0) (0xc000b2a8c0) Stream removed, broadcasting: 1
I0125 01:23:49.759403       9 log.go:172] (0xc002bd89a0) (0xc000b2aa00) Stream removed, broadcasting: 3
I0125 01:23:49.759420       9 log.go:172] (0xc002bd89a0) (0xc0017e2000) Stream removed, broadcasting: 5
Jan 25 01:23:49.759: INFO: Exec stderr: ""
Jan 25 01:23:49.759: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5244 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 01:23:49.759: INFO: >>> kubeConfig: /root/.kube/config
I0125 01:23:49.825863       9 log.go:172] (0xc001a3c840) (0xc002c755e0) Create stream
I0125 01:23:49.825928       9 log.go:172] (0xc001a3c840) (0xc002c755e0) Stream added, broadcasting: 1
I0125 01:23:49.829995       9 log.go:172] (0xc001a3c840) Reply frame received for 1
I0125 01:23:49.830080       9 log.go:172] (0xc001a3c840) (0xc001c4afa0) Create stream
I0125 01:23:49.830102       9 log.go:172] (0xc001a3c840) (0xc001c4afa0) Stream added, broadcasting: 3
I0125 01:23:49.832867       9 log.go:172] (0xc001a3c840) Reply frame received for 3
I0125 01:23:49.832906       9 log.go:172] (0xc001a3c840) (0xc000b2ab40) Create stream
I0125 01:23:49.832912       9 log.go:172] (0xc001a3c840) (0xc000b2ab40) Stream added, broadcasting: 5
I0125 01:23:49.834908       9 log.go:172] (0xc001a3c840) Reply frame received for 5
I0125 01:23:49.912502       9 log.go:172] (0xc001a3c840) Data frame received for 3
I0125 01:23:49.912565       9 log.go:172] (0xc001c4afa0) (3) Data frame handling
I0125 01:23:49.912606       9 log.go:172] (0xc001c4afa0) (3) Data frame sent
I0125 01:23:49.976971       9 log.go:172] (0xc001a3c840) (0xc001c4afa0) Stream removed, broadcasting: 3
I0125 01:23:49.977100       9 log.go:172] (0xc001a3c840) (0xc000b2ab40) Stream removed, broadcasting: 5
I0125 01:23:49.977274       9 log.go:172] (0xc001a3c840) Data frame received for 1
I0125 01:23:49.977339       9 log.go:172] (0xc002c755e0) (1) Data frame handling
I0125 01:23:49.977404       9 log.go:172] (0xc002c755e0) (1) Data frame sent
I0125 01:23:49.977455       9 log.go:172] (0xc001a3c840) (0xc002c755e0) Stream removed, broadcasting: 1
I0125 01:23:49.977539       9 log.go:172] (0xc001a3c840) Go away received
I0125 01:23:49.978428       9 log.go:172] (0xc001a3c840) (0xc002c755e0) Stream removed, broadcasting: 1
I0125 01:23:49.978457       9 log.go:172] (0xc001a3c840) (0xc001c4afa0) Stream removed, broadcasting: 3
I0125 01:23:49.978478       9 log.go:172] (0xc001a3c840) (0xc000b2ab40) Stream removed, broadcasting: 5
Jan 25 01:23:49.978: INFO: Exec stderr: ""
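The step below switches to busybox-3, whose container spec mounts /etc/hosts itself; the kubelet only manages /etc/hosts for containers that do not mount that path on their own. A sketch of such a container (volume and image names are illustrative; the container name matches the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Because the container mounts something at /etc/hosts explicitly, the
	// kubelet leaves that file alone, which the next verification relies on.
	hostsVol := corev1.Volume{
		Name: "host-etc-hosts",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
		},
	}
	c := corev1.Container{
		Name:  "busybox-3",
		Image: "busybox",
		VolumeMounts: []corev1.VolumeMount{{
			Name:      hostsVol.Name,
			MountPath: "/etc/hosts",
		}},
	}
	fmt.Println(c.Name, c.VolumeMounts[0].MountPath)
}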
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 25 01:23:49.978: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5244 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 01:23:49.978: INFO: >>> kubeConfig: /root/.kube/config
I0125 01:23:50.025112       9 log.go:172] (0xc002bd8fd0) (0xc000b2b180) Create stream
I0125 01:23:50.025192       9 log.go:172] (0xc002bd8fd0) (0xc000b2b180) Stream added, broadcasting: 1
I0125 01:23:50.028469       9 log.go:172] (0xc002bd8fd0) Reply frame received for 1
I0125 01:23:50.028502       9 log.go:172] (0xc002bd8fd0) (0xc0017e20a0) Create stream
I0125 01:23:50.028511       9 log.go:172] (0xc002bd8fd0) (0xc0017e20a0) Stream added, broadcasting: 3
I0125 01:23:50.029472       9 log.go:172] (0xc002bd8fd0) Reply frame received for 3
I0125 01:23:50.029489       9 log.go:172] (0xc002bd8fd0) (0xc0017e2140) Create stream
I0125 01:23:50.029495       9 log.go:172] (0xc002bd8fd0) (0xc0017e2140) Stream added, broadcasting: 5
I0125 01:23:50.030588       9 log.go:172] (0xc002bd8fd0) Reply frame received for 5
I0125 01:23:50.107291       9 log.go:172] (0xc002bd8fd0) Data frame received for 3
I0125 01:23:50.107392       9 log.go:172] (0xc0017e20a0) (3) Data frame handling
I0125 01:23:50.107439       9 log.go:172] (0xc0017e20a0) (3) Data frame sent
I0125 01:23:50.172772       9 log.go:172] (0xc002bd8fd0) Data frame received for 1
I0125 01:23:50.172837       9 log.go:172] (0xc002bd8fd0) (0xc0017e2140) Stream removed, broadcasting: 5
I0125 01:23:50.172908       9 log.go:172] (0xc000b2b180) (1) Data frame handling
I0125 01:23:50.172933       9 log.go:172] (0xc000b2b180) (1) Data frame sent
I0125 01:23:50.172965       9 log.go:172] (0xc002bd8fd0) (0xc000b2b180) Stream removed, broadcasting: 1
I0125 01:23:50.173131       9 log.go:172] (0xc002bd8fd0) (0xc0017e20a0) Stream removed, broadcasting: 3
I0125 01:23:50.173162       9 log.go:172] (0xc002bd8fd0) Go away received
I0125 01:23:50.173568       9 log.go:172] (0xc002bd8fd0) (0xc000b2b180) Stream removed, broadcasting: 1
I0125 01:23:50.173626       9 log.go:172] (0xc002bd8fd0) (0xc0017e20a0) Stream removed, broadcasting: 3
I0125 01:23:50.173642       9 log.go:172] (0xc002bd8fd0) (0xc0017e2140) Stream removed, broadcasting: 5
Jan 25 01:23:50.173: INFO: Exec stderr: ""
Jan 25 01:23:50.173: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5244 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 01:23:50.173: INFO: >>> kubeConfig: /root/.kube/config
I0125 01:23:50.218696       9 log.go:172] (0xc002bd9550) (0xc000b2b400) Create stream
I0125 01:23:50.218775       9 log.go:172] (0xc002bd9550) (0xc000b2b400) Stream added, broadcasting: 1
I0125 01:23:50.223927       9 log.go:172] (0xc002bd9550) Reply frame received for 1
I0125 01:23:50.223998       9 log.go:172] (0xc002bd9550) (0xc002c75680) Create stream
I0125 01:23:50.224012       9 log.go:172] (0xc002bd9550) (0xc002c75680) Stream added, broadcasting: 3
I0125 01:23:50.226092       9 log.go:172] (0xc002bd9550) Reply frame received for 3
I0125 01:23:50.226121       9 log.go:172] (0xc002bd9550) (0xc00122e000) Create stream
I0125 01:23:50.226132       9 log.go:172] (0xc002bd9550) (0xc00122e000) Stream added, broadcasting: 5
I0125 01:23:50.228213       9 log.go:172] (0xc002bd9550) Reply frame received for 5
I0125 01:23:50.323843       9 log.go:172] (0xc002bd9550) Data frame received for 3
I0125 01:23:50.323926       9 log.go:172] (0xc002c75680) (3) Data frame handling
I0125 01:23:50.323960       9 log.go:172] (0xc002c75680) (3) Data frame sent
I0125 01:23:50.438308       9 log.go:172] (0xc002bd9550) (0xc002c75680) Stream removed, broadcasting: 3
I0125 01:23:50.438898       9 log.go:172] (0xc002bd9550) Data frame received for 1
I0125 01:23:50.438972       9 log.go:172] (0xc000b2b400) (1) Data frame handling
I0125 01:23:50.439036       9 log.go:172] (0xc000b2b400) (1) Data frame sent
I0125 01:23:50.439102       9 log.go:172] (0xc002bd9550) (0xc00122e000) Stream removed, broadcasting: 5
I0125 01:23:50.439179       9 log.go:172] (0xc002bd9550) (0xc000b2b400) Stream removed, broadcasting: 1
I0125 01:23:50.439240       9 log.go:172] (0xc002bd9550) Go away received
I0125 01:23:50.439827       9 log.go:172] (0xc002bd9550) (0xc000b2b400) Stream removed, broadcasting: 1
I0125 01:23:50.439912       9 log.go:172] (0xc002bd9550) (0xc002c75680) Stream removed, broadcasting: 3
I0125 01:23:50.439970       9 log.go:172] (0xc002bd9550) (0xc00122e000) Stream removed, broadcasting: 5
Jan 25 01:23:50.440: INFO: Exec stderr: ""
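[editor's note] The verification step above passes because busybox-3 declares its own mount at /etc/hosts, and the kubelet only installs its managed hosts file into containers that do not already mount something at that path. A hedged sketch of that part of the pod shape follows; the container name comes from the log, while the volume name, image, and command are assumptions.

    // Hedged sketch of a container that opts out of the kubelet-managed
    // /etc/hosts by mounting its own file at that path.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name: "host-etc-hosts", // hypothetical volume name
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "busybox-3",
                    Image:   "busybox", // assumed image
                    Command: []string{"sleep", "900"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "host-etc-hosts",
                        MountPath: "/etc/hosts", // explicit mount => kubelet leaves it alone
                    }},
                }},
            },
        }
        fmt.Printf("container %s mounts %s\n",
            pod.Spec.Containers[0].Name,
            pod.Spec.Containers[0].VolumeMounts[0].MountPath)
    }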
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 25 01:23:50.440: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5244 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 01:23:50.440: INFO: >>> kubeConfig: /root/.kube/config
I0125 01:23:50.497487       9 log.go:172] (0xc002bd9b80) (0xc0002c59a0) Create stream
I0125 01:23:50.497636       9 log.go:172] (0xc002bd9b80) (0xc0002c59a0) Stream added, broadcasting: 1
I0125 01:23:50.504903       9 log.go:172] (0xc002bd9b80) Reply frame received for 1
I0125 01:23:50.504950       9 log.go:172] (0xc002bd9b80) (0xc002c75720) Create stream
I0125 01:23:50.504962       9 log.go:172] (0xc002bd9b80) (0xc002c75720) Stream added, broadcasting: 3
I0125 01:23:50.507540       9 log.go:172] (0xc002bd9b80) Reply frame received for 3
I0125 01:23:50.507569       9 log.go:172] (0xc002bd9b80) (0xc001c4b180) Create stream
I0125 01:23:50.507577       9 log.go:172] (0xc002bd9b80) (0xc001c4b180) Stream added, broadcasting: 5
I0125 01:23:50.509281       9 log.go:172] (0xc002bd9b80) Reply frame received for 5
I0125 01:23:50.597506       9 log.go:172] (0xc002bd9b80) Data frame received for 3
I0125 01:23:50.597748       9 log.go:172] (0xc002c75720) (3) Data frame handling
I0125 01:23:50.597850       9 log.go:172] (0xc002c75720) (3) Data frame sent
I0125 01:23:50.749013       9 log.go:172] (0xc002bd9b80) (0xc002c75720) Stream removed, broadcasting: 3
I0125 01:23:50.749240       9 log.go:172] (0xc002bd9b80) Data frame received for 1
I0125 01:23:50.749260       9 log.go:172] (0xc0002c59a0) (1) Data frame handling
I0125 01:23:50.749277       9 log.go:172] (0xc0002c59a0) (1) Data frame sent
I0125 01:23:50.749285       9 log.go:172] (0xc002bd9b80) (0xc0002c59a0) Stream removed, broadcasting: 1
I0125 01:23:50.749365       9 log.go:172] (0xc002bd9b80) (0xc001c4b180) Stream removed, broadcasting: 5
I0125 01:23:50.749524       9 log.go:172] (0xc002bd9b80) Go away received
I0125 01:23:50.749881       9 log.go:172] (0xc002bd9b80) (0xc0002c59a0) Stream removed, broadcasting: 1
I0125 01:23:50.749906       9 log.go:172] (0xc002bd9b80) (0xc002c75720) Stream removed, broadcasting: 3
I0125 01:23:50.749913       9 log.go:172] (0xc002bd9b80) (0xc001c4b180) Stream removed, broadcasting: 5
Jan 25 01:23:50.749: INFO: Exec stderr: ""
Jan 25 01:23:50.750: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5244 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 01:23:50.750: INFO: >>> kubeConfig: /root/.kube/config
I0125 01:23:50.817918       9 log.go:172] (0xc003afe210) (0xc0002c5ea0) Create stream
I0125 01:23:50.818464       9 log.go:172] (0xc003afe210) (0xc0002c5ea0) Stream added, broadcasting: 1
I0125 01:23:50.829509       9 log.go:172] (0xc003afe210) Reply frame received for 1
I0125 01:23:50.829691       9 log.go:172] (0xc003afe210) (0xc00122e1e0) Create stream
I0125 01:23:50.829715       9 log.go:172] (0xc003afe210) (0xc00122e1e0) Stream added, broadcasting: 3
I0125 01:23:50.832246       9 log.go:172] (0xc003afe210) Reply frame received for 3
I0125 01:23:50.832294       9 log.go:172] (0xc003afe210) (0xc000316000) Create stream
I0125 01:23:50.832306       9 log.go:172] (0xc003afe210) (0xc000316000) Stream added, broadcasting: 5
I0125 01:23:50.833732       9 log.go:172] (0xc003afe210) Reply frame received for 5
I0125 01:23:50.938833       9 log.go:172] (0xc003afe210) Data frame received for 3
I0125 01:23:50.938901       9 log.go:172] (0xc00122e1e0) (3) Data frame handling
I0125 01:23:50.938922       9 log.go:172] (0xc00122e1e0) (3) Data frame sent
I0125 01:23:51.011586       9 log.go:172] (0xc003afe210) (0xc00122e1e0) Stream removed, broadcasting: 3
I0125 01:23:51.011865       9 log.go:172] (0xc003afe210) Data frame received for 1
I0125 01:23:51.011908       9 log.go:172] (0xc0002c5ea0) (1) Data frame handling
I0125 01:23:51.011925       9 log.go:172] (0xc0002c5ea0) (1) Data frame sent
I0125 01:23:51.011970       9 log.go:172] (0xc003afe210) (0xc000316000) Stream removed, broadcasting: 5
I0125 01:23:51.012022       9 log.go:172] (0xc003afe210) (0xc0002c5ea0) Stream removed, broadcasting: 1
I0125 01:23:51.012077       9 log.go:172] (0xc003afe210) Go away received
I0125 01:23:51.012312       9 log.go:172] (0xc003afe210) (0xc0002c5ea0) Stream removed, broadcasting: 1
I0125 01:23:51.012328       9 log.go:172] (0xc003afe210) (0xc00122e1e0) Stream removed, broadcasting: 3
I0125 01:23:51.012342       9 log.go:172] (0xc003afe210) (0xc000316000) Stream removed, broadcasting: 5
Jan 25 01:23:51.012: INFO: Exec stderr: ""
Jan 25 01:23:51.012: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5244 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 01:23:51.012: INFO: >>> kubeConfig: /root/.kube/config
I0125 01:23:51.063616       9 log.go:172] (0xc003afe840) (0xc000542820) Create stream
I0125 01:23:51.063767       9 log.go:172] (0xc003afe840) (0xc000542820) Stream added, broadcasting: 1
I0125 01:23:51.076583       9 log.go:172] (0xc003afe840) Reply frame received for 1
I0125 01:23:51.076773       9 log.go:172] (0xc003afe840) (0xc002c757c0) Create stream
I0125 01:23:51.076802       9 log.go:172] (0xc003afe840) (0xc002c757c0) Stream added, broadcasting: 3
I0125 01:23:51.079011       9 log.go:172] (0xc003afe840) Reply frame received for 3
I0125 01:23:51.079037       9 log.go:172] (0xc003afe840) (0xc000317f40) Create stream
I0125 01:23:51.079046       9 log.go:172] (0xc003afe840) (0xc000317f40) Stream added, broadcasting: 5
I0125 01:23:51.081455       9 log.go:172] (0xc003afe840) Reply frame received for 5
I0125 01:23:51.136192       9 log.go:172] (0xc003afe840) Data frame received for 3
I0125 01:23:51.136226       9 log.go:172] (0xc002c757c0) (3) Data frame handling
I0125 01:23:51.136240       9 log.go:172] (0xc002c757c0) (3) Data frame sent
I0125 01:23:51.195053       9 log.go:172] (0xc003afe840) Data frame received for 1
I0125 01:23:51.195097       9 log.go:172] (0xc003afe840) (0xc000317f40) Stream removed, broadcasting: 5
I0125 01:23:51.195129       9 log.go:172] (0xc000542820) (1) Data frame handling
I0125 01:23:51.195141       9 log.go:172] (0xc000542820) (1) Data frame sent
I0125 01:23:51.195154       9 log.go:172] (0xc003afe840) (0xc002c757c0) Stream removed, broadcasting: 3
I0125 01:23:51.195172       9 log.go:172] (0xc003afe840) (0xc000542820) Stream removed, broadcasting: 1
I0125 01:23:51.195281       9 log.go:172] (0xc003afe840) (0xc000542820) Stream removed, broadcasting: 1
I0125 01:23:51.195290       9 log.go:172] (0xc003afe840) (0xc002c757c0) Stream removed, broadcasting: 3
I0125 01:23:51.195297       9 log.go:172] (0xc003afe840) (0xc000317f40) Stream removed, broadcasting: 5
Jan 25 01:23:51.195: INFO: Exec stderr: ""
Jan 25 01:23:51.195: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5244 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 25 01:23:51.195: INFO: >>> kubeConfig: /root/.kube/config
I0125 01:23:51.195832       9 log.go:172] (0xc003afe840) Go away received
I0125 01:23:51.242625       9 log.go:172] (0xc003afebb0) (0xc0005435e0) Create stream
I0125 01:23:51.242724       9 log.go:172] (0xc003afebb0) (0xc0005435e0) Stream added, broadcasting: 1
I0125 01:23:51.246358       9 log.go:172] (0xc003afebb0) Reply frame received for 1
I0125 01:23:51.246387       9 log.go:172] (0xc003afebb0) (0xc00122e460) Create stream
I0125 01:23:51.246395       9 log.go:172] (0xc003afebb0) (0xc00122e460) Stream added, broadcasting: 3
I0125 01:23:51.248128       9 log.go:172] (0xc003afebb0) Reply frame received for 3
I0125 01:23:51.248208       9 log.go:172] (0xc003afebb0) (0xc00122e500) Create stream
I0125 01:23:51.248218       9 log.go:172] (0xc003afebb0) (0xc00122e500) Stream added, broadcasting: 5
I0125 01:23:51.250858       9 log.go:172] (0xc003afebb0) Reply frame received for 5
I0125 01:23:51.312115       9 log.go:172] (0xc003afebb0) Data frame received for 3
I0125 01:23:51.312188       9 log.go:172] (0xc00122e460) (3) Data frame handling
I0125 01:23:51.312223       9 log.go:172] (0xc00122e460) (3) Data frame sent
I0125 01:23:51.368725       9 log.go:172] (0xc003afebb0) Data frame received for 1
I0125 01:23:51.368765       9 log.go:172] (0xc0005435e0) (1) Data frame handling
I0125 01:23:51.368777       9 log.go:172] (0xc0005435e0) (1) Data frame sent
I0125 01:23:51.369278       9 log.go:172] (0xc003afebb0) (0xc0005435e0) Stream removed, broadcasting: 1
I0125 01:23:51.369622       9 log.go:172] (0xc003afebb0) (0xc00122e460) Stream removed, broadcasting: 3
I0125 01:23:51.370146       9 log.go:172] (0xc003afebb0) (0xc00122e500) Stream removed, broadcasting: 5
I0125 01:23:51.370239       9 log.go:172] (0xc003afebb0) Go away received
I0125 01:23:51.370279       9 log.go:172] (0xc003afebb0) (0xc0005435e0) Stream removed, broadcasting: 1
I0125 01:23:51.370290       9 log.go:172] (0xc003afebb0) (0xc00122e460) Stream removed, broadcasting: 3
I0125 01:23:51.370299       9 log.go:172] (0xc003afebb0) (0xc00122e500) Stream removed, broadcasting: 5
Jan 25 01:23:51.370: INFO: Exec stderr: ""
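[editor's note] Likewise, the host-network checks pass because the kubelet does not rewrite /etc/hosts for pods running with hostNetwork: true; such pods are meant to see the node's view of name resolution unchanged. A hedged sketch of the pod shape exercised above; the pod and container names come from the log, the image and command are assumptions.

    // Hedged sketch of the host-network pod whose /etc/hosts the kubelet
    // deliberately leaves unmanaged.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
            Spec: corev1.PodSpec{
                HostNetwork: true, // kubelet skips /etc/hosts management for host-network pods
                Containers: []corev1.Container{
                    {Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}},
                    {Name: "busybox-2", Image: "busybox", Command: []string{"sleep", "900"}},
                },
            },
        }
        fmt.Printf("pod %s hostNetwork=%v\n", pod.Name, pod.Spec.HostNetwork)
    }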
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 25 01:23:51.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5244" for this suite.

• [SLOW TEST:24.627 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4550,"failed":0}
SSSSSSSSSSSSS
Jan 25 01:23:51.389: INFO: Running AfterSuite actions on all nodes
Jan 25 01:23:51.389: INFO: Running AfterSuite actions on node 1
Jan 25 01:23:51.389: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4563,"failed":0}

Ran 278 of 4841 Specs in 6294.148 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4563 Skipped
PASS